[Python-checkins] peps (merge default -> default): Merge.

larry.hastings python-checkins at python.org
Sun Aug 4 10:49:00 CEST 2013


http://hg.python.org/peps/rev/4e16704af966
changeset:   5032:4e16704af966
parent:      5031:1511a7c260dc
parent:      5030:e4e32122be07
user:        Larry Hastings <larry at hastings.org>
date:        Sun Aug 04 01:46:12 2013 -0700
summary:
  Merge.

files:
  pep-0008.txt                |   299 ++-
  pep-0315.txt                |    23 +-
  pep-0394.txt                |    55 +-
  pep-0413.txt                |     2 +-
  pep-0426.txt                |  2568 ++++++++++++++--------
  pep-0426/pydist-schema.json |   329 ++
  pep-0432.txt                |    29 +-
  pep-0435.txt                |     6 +-
  pep-0439.txt                |   166 +-
  pep-0440.txt                |   361 +-
  pep-0442.txt                |     7 +-
  pep-0443.txt                |   112 +-
  pep-0445.txt                |   773 ++++++
  pep-0446.txt                |   248 ++
  pep-0447.txt                |   408 +++
  pep-0448.txt                |   247 ++
  pep-0466/test_cloexec.py    |    50 +
  pep-3150.txt                |   120 +-
  pep-3156.txt                |    30 +-
  pep2html.py                 |     2 +-
  20 files changed, 4480 insertions(+), 1355 deletions(-)


diff --git a/pep-0008.txt b/pep-0008.txt
--- a/pep-0008.txt
+++ b/pep-0008.txt
@@ -3,12 +3,13 @@
 Version: $Revision$
 Last-Modified: $Date$
 Author: Guido van Rossum <guido at python.org>,
-        Barry Warsaw <barry at python.org>
+        Barry Warsaw <barry at python.org>,
+        Nick Coghlan <ncoghlan at gmail.com>
 Status: Active
 Type: Process
 Content-Type: text/x-rst
 Created: 05-Jul-2001
-Post-History: 05-Jul-2001
+Post-History: 05-Jul-2001, 01-Aug-2013
 
 
 Introduction
@@ -23,6 +24,13 @@
 Guido's original Python Style Guide essay, with some additions from
 Barry's style guide [2]_.
 
+This style guide evolves over time as additional conventions are
+identified and past conventions are rendered obsolete by changes in
+the language itself.
+
+Many projects have their own coding style guidelines. In the event of any
+conflicts, such project-specific guides take precedence for that project.
+
 
 A Foolish Consistency is the Hobgoblin of Little Minds
 ======================================================
@@ -41,15 +49,24 @@
 judgment.  Look at other examples and decide what looks best.  And
 don't hesitate to ask!
 
-Two good reasons to break a particular rule:
+In particular: do not break backwards compatibility just to comply with
+this PEP!
 
-1. When applying the rule would make the code less readable, even for
-   someone who is used to reading code that follows the rules.
+Some other good reasons to ignore a particular guideline:
+
+1. When applying the guideline would make the code less readable, even
+   for someone who is used to reading code that follows this PEP.
 
 2. To be consistent with surrounding code that also breaks it (maybe
    for historic reasons) -- although this is also an opportunity to
    clean up someone else's mess (in true XP style).
 
+3. Because the code in question predates the introduction of the
+   guideline and there is no other reason to be modifying that code.
+
+4. When the code needs to remain compatible with older versions of
+   Python that don't support the feature recommended by the style guide.
+
 
 Code lay-out
 ============
@@ -59,9 +76,6 @@
 
 Use 4 spaces per indentation level.
 
-For really old code that you don't want to mess up, you can continue
-to use 8-space tabs.
-
 Continuation lines should align wrapped elements either vertically
 using Python's implicit line joining inside parentheses, brackets and
 braces, or using a hanging indent.  When using a hanging indent the
@@ -101,7 +115,8 @@
       var_three, var_four)
 
 The closing brace/bracket/parenthesis on multi-line constructs may
-either line up under the last item of the list, as in::
+either line up under the first non-whitespace character of the last
+line of the list, as in::
 
     my_list = [
         1, 2, 3,
@@ -128,47 +143,78 @@
 Tabs or Spaces?
 ---------------
 
-Never mix tabs and spaces.
+Spaces are the preferred indentation method.
 
-The most popular way of indenting Python is with spaces only.  The
-second-most popular way is with tabs only.  Code indented with a
-mixture of tabs and spaces should be converted to using spaces
-exclusively.  When invoking the Python command line interpreter with
+Tabs should be used solely to remain consistent with code that is
+already indented with tabs.
+
+Python 3 disallows mixing the use of tabs and spaces for indentation.
+
+Python 2 code indented with a mixture of tabs and spaces should be
+converted to using spaces exclusively.
+
+When the Python 2 command line interpreter is invoked with
 the ``-t`` option, it issues warnings about code that illegally mixes
 tabs and spaces.  When using ``-tt`` these warnings become errors.
 These options are highly recommended!
 
-For new projects, spaces-only are strongly recommended over tabs.
-Most editors have features that make this easy to do.
 
 Maximum Line Length
 -------------------
 
 Limit all lines to a maximum of 79 characters.
 
-There are still many devices around that are limited to 80 character
-lines; plus, limiting windows to 80 characters makes it possible to
-have several windows side-by-side.  The default wrapping on such
-devices disrupts the visual structure of the code, making it more
-difficult to understand.  Therefore, please limit all lines to a
-maximum of 79 characters.  For flowing long blocks of text (docstrings
-or comments), limiting the length to 72 characters is recommended.
+For flowing long blocks of text with fewer structural restrictions
+(docstrings or comments), the line length should be limited to 72
+characters.
+
+Limiting the required editor window width makes it possible to have
+several files open side-by-side, and works well when using code
+review tools that present the two versions in adjacent columns.
+
+The default wrapping in most tools disrupts the visual structure of the
+code, making it more difficult to understand. The limits are chosen to
+avoid wrapping in editors with the window width set to 80, even
+if the tool places a marker glyph in the final column when wrapping
+lines. Some web-based tools may not offer dynamic line wrapping at all.
+
+Some teams strongly prefer a longer line length.  For code maintained
+exclusively or primarily by a team that can reach agreement on this
+issue, it is okay to increase the nominal line length from 80 to
+100 characters (effectively increasing the maximum length to 99
+characters), provided that comments and docstrings are still wrapped
+at 72 characters.  
+
+The Python standard library is conservative and requires limiting
+lines to 79 characters (and docstrings/comments to 72).
 
 The preferred way of wrapping long lines is by using Python's implied
 line continuation inside parentheses, brackets and braces.  Long lines
 can be broken over multiple lines by wrapping expressions in
 parentheses. These should be used in preference to using a backslash
-for line continuation.  Make sure to indent the continued line
-appropriately.  The preferred place to break around a binary operator
-is *after* the operator, not before it.  Some examples::
+for line continuation.
+
+Backslashes may still be appropriate at times.  For example, long,
+multiple ``with``-statements cannot use implicit continuation, so
+backslashes are acceptable::
+
+    with open('/path/to/some/file/you/want/to/read') as file_1, \
+            open('/path/to/some/file/being/written', 'w') as file_2:
+        file_2.write(file_1.read())
+
+Another such case is with ``assert`` statements.
+
+Make sure to indent the continued line appropriately.  The preferred
+place to break around a binary operator is *after* the operator, not
+before it.  Some examples::
 
     class Rectangle(Blob):
 
         def __init__(self, width, height,
                      color='black', emphasis=None, highlight=0):
             if (width == 0 and height == 0 and
-                color == 'red' and emphasis == 'strong' or
-                highlight > 100):
+                    color == 'red' and emphasis == 'strong' or
+                    highlight > 100):
                 raise ValueError("sorry, you lose")
             if width == 0 and height == 0 and (color == 'red' or
                                                emphasis is None):
@@ -198,18 +244,21 @@
 Note, some editors and web-based code viewers may not recognize
 control-L as a form feed and will show another glyph in its place.
 
-Encodings (PEP 263)
--------------------
 
-Code in the core Python distribution should always use the ASCII or
-Latin-1 encoding (a.k.a. ISO-8859-1).  For Python 3.0 and beyond,
-UTF-8 is preferred over Latin-1, see PEP 3120.
+Source File Encoding
+--------------------
 
-Files using ASCII should not have a coding cookie.  Latin-1 (or UTF-8)
-should only be used when a comment or docstring needs to mention an
-author name that requires Latin-1; otherwise, using ``\x``, ``\u`` or
-``\U`` escapes is the preferred way to include non-ASCII data in
-string literals.
+Code in the core Python distribution should always use UTF-8 (or ASCII
+in Python 2).
+
+Files using ASCII (in Python 2) or UTF-8 (in Python 3) should not have
+an encoding declaration.
+
+In the standard library, non-default encodings should be used only for
+test purposes or when a comment or docstring needs to mention an author
+name that contains non-ASCII characters; otherwise, using ``\x``,
+``\u``, ``\U``, or ``\N`` escapes is the preferred way to include
+non-ASCII data in string literals.
 
 For Python 3.0 and beyond, the following policy is prescribed for the
 standard library (see PEP 3131): All identifiers in the Python
@@ -253,11 +302,27 @@
 
   Put any relevant ``__all__`` specification after the imports.
 
-- Relative imports for intra-package imports are highly discouraged.
-  Always use the absolute package path for all imports.  Even now that
-  PEP 328 is fully implemented in Python 2.5, its style of explicit
-  relative imports is actively discouraged; absolute imports are more
-  portable and usually more readable.
+- Absolute imports are recommended, as they are usually more readable
+  and tend to be better behaved (or at least give better error
+  messages) if the import system is incorrectly configured (such as
+  when a directory inside a package ends up on ``sys.path``)::
+
+    import mypkg.sibling
+    from mypkg import sibling
+    from mypkg.sibling import example
+
+  However, explicit relative imports are an acceptable alternative to
+  absolute imports, especially when dealing with complex package layouts
+  where using absolute imports would be unnecessarily verbose::
+
+    from . import sibling
+    from .sibling import example
+
+  Standard library code should avoid complex package layouts and always
+  use absolute imports.
+
+  Implicit relative imports should *never* be used and have been removed
+  in Python 3.
 
 - When importing a class from a class-containing module, it's usually
   okay to spell this::
@@ -272,6 +337,18 @@
 
   and use "myclass.MyClass" and "foo.bar.yourclass.YourClass".
 
+- Wildcard imports (``from <module> import *``) should be avoided, as
+  they make it unclear which names are present in the namespace,
+  confusing both readers and many automated tools. There is one
+  defensible use case for a wildcard import, which is to republish an
+  internal interface as part of a public API (for example, overwriting
+  a pure Python implementation of an interface with the definitions
+from an optional accelerator module, where exactly which definitions
+will be overwritten isn't known in advance).
+
+  When republishing names this way, the guidelines below regarding
+  public and internal interfaces still apply.
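As an illustration of the accelerator-override pattern described above (all module and function names here are hypothetical, not part of the PEP):

```python
# serializer.py -- pure Python reference implementations (hypothetical).

def dumps(obj):
    """Portable fallback implementation."""
    return repr(obj)

def loads(text):
    """Illustration only: never eval() untrusted input."""
    return eval(text)

# Republish any accelerated definitions, overwriting the fallbacks.
# Exactly which names the optional C module provides isn't known in
# advance, which is the one defensible use of a wildcard import.
try:
    from _serializer_accel import *  # hypothetical accelerator module
except ImportError:
    pass  # accelerator not available; keep the pure Python versions
```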
+
 
 Whitespace in Expressions and Statements
 ========================================
@@ -330,7 +407,7 @@
 
 - If operators with different priorities are used, consider adding
   whitespace around the operators with the lowest priority(ies). Use
-  your own judgement; however, never use more than one space, and
+  your own judgment; however, never use more than one space, and
   always have the same amount of whitespace on both sides of a binary
   operator.
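A short sketch in the spirit of this guideline, with whitespace tracking operator priority (the values are illustrative):

```python
# The lowest-priority operator gets the surrounding spaces; operators
# that bind more tightly are written without spaces.
i, x, y, a, b = 0, 3, 4, 2, 1

i = i + 1
x = x*2 - 1
hypot2 = x*x + y*y
c = (a+b) * (a-b)
```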
 
@@ -747,6 +824,36 @@
   advanced callers.
 
 
+Public and internal interfaces
+------------------------------
+
+Any backwards compatibility guarantees apply only to public interfaces.
+Accordingly, it is important that users be able to clearly distinguish
+between public and internal interfaces.
+
+Documented interfaces are considered public, unless the documentation
+explicitly declares them to be provisional or internal interfaces exempt
+from the usual backwards compatibility guarantees. All undocumented
+interfaces should be assumed to be internal.
+
+To better support introspection, modules should explicitly declare the
+names in their public API using the ``__all__`` attribute. Setting
+``__all__`` to an empty list indicates that the module has no public API.
+
+Even with ``__all__`` set appropriately, internal interfaces (packages,
+modules, classes, functions, attributes or other names) should still be
+prefixed with a single leading underscore.
+
+An interface is also considered internal if any containing namespace
+(package, module or class) is considered internal.
+
+Imported names should always be considered an implementation detail.
+Other modules must not rely on indirect access to such imported names
+unless they are an explicitly documented part of the containing module's
+API, such as ``os.path`` or a package's ``__init__`` module that exposes
+functionality from submodules.
+
+
 Programming Recommendations
 ===========================
 
@@ -756,10 +863,12 @@
 
   For example, do not rely on CPython's efficient implementation of
   in-place string concatenation for statements in the form ``a += b``
-  or ``a = a + b``.  Those statements run more slowly in Jython.  In
-  performance sensitive parts of the library, the ``''.join()`` form
-  should be used instead.  This will ensure that concatenation occurs
-  in linear time across various implementations.
+  or ``a = a + b``.  This optimization is fragile even in CPython (it
+  only works for some types) and isn't present at all in implementations
+  that don't use refcounting.  In performance sensitive parts of the
+  library, the ``''.join()`` form should be used instead.  This will
+  ensure that concatenation occurs in linear time across various
+  implementations.
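The recommendation above can be sketched as follows (function names are illustrative):

```python
def join_slow(parts):
    # Relies on the fragile, CPython-only in-place optimization;
    # can be quadratic on other implementations.
    result = ''
    for part in parts:
        result += part
    return result

def join_fast(parts):
    # ''.join() is linear time on all implementations.
    return ''.join(parts)
```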
 
 - Comparisons to singletons like None should always be done with
   ``is`` or ``is not``, never the equality operators.
@@ -786,29 +895,59 @@
   operator.  However, it is best to implement all six operations so
   that confusion doesn't arise in other contexts.
 
-- Use class-based exceptions.
+- Always use a def statement instead of an assignment statement that binds
+  a lambda expression directly to a name.
 
-  String exceptions in new code are forbidden, and this language
-  feature has been removed in Python 2.6.
+  Yes::
 
-  Modules or packages should define their own domain-specific base
-  exception class, which should be subclassed from the built-in
-  Exception class.  Always include a class docstring.  E.g.::
+      def f(x): return 2*x
 
-      class MessageError(Exception):
-          """Base class for errors in the email package."""
+  No::
+
+      f = lambda x: 2*x
+
+  The first form means that the name of the resulting function object is
+  specifically 'f' instead of the generic '<lambda>'. This is more
+  useful for tracebacks and string representations in general. The use
+  of the assignment statement eliminates the sole benefit a lambda
+  expression can offer over an explicit def statement (i.e. that it can
+  be embedded inside a larger expression).
+
+- Derive exceptions from ``Exception`` rather than ``BaseException``.
+  Direct inheritance from ``BaseException`` is reserved for exceptions
+  where catching them is almost always the wrong thing to do.
+
+  Design exception hierarchies based on the distinctions that code
+  *catching* the exceptions is likely to need, rather than the locations
+  where the exceptions are raised. Aim to answer the question
+  "What went wrong?" programmatically, rather than only stating that
+  "A problem occurred" (see PEP 3151 for an example of this lesson being
+  learned for the builtin exception hierarchy).
 
   Class naming conventions apply here, although you should add the
-  suffix "Error" to your exception classes, if the exception is an
-  error.  Non-error exceptions need no special suffix.
+  suffix "Error" to your exception classes if the exception is an
+  error.  Non-error exceptions that are used for non-local flow control
+  or other forms of signaling need no special suffix.
 
-- When raising an exception, use ``raise ValueError('message')``
+- Use exception chaining appropriately. In Python 3, "raise X from Y"
+  should be used to indicate explicit replacement without losing the
+  original traceback.
+
+  When deliberately replacing an inner exception (using "raise X" in
+  Python 2 or "raise X from None" in Python 3.3+), ensure that relevant
+  details are transferred to the new exception (such as preserving the
+  attribute name when converting KeyError to AttributeError, or
+  embedding the text of the original exception in the new exception
+  message).
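A sketch of both chaining forms described above (the exception and function names are illustrative, and ``raise ... from`` requires Python 3):

```python
class ConfigError(Exception):
    """Hypothetical domain-specific error used for illustration."""

def get_setting(settings, name):
    try:
        return settings[name]
    except KeyError:
        # "raise X from None": deliberate replacement; the KeyError
        # context is suppressed, but the relevant detail (the missing
        # name) is preserved in the new exception's message.
        raise ConfigError('unknown setting: %r' % (name,)) from None

def parse_port(text):
    try:
        return int(text)
    except ValueError as exc:
        # "raise X from Y": explicit replacement that keeps the
        # original traceback attached as __cause__.
        raise ConfigError('bad port: %r' % (text,)) from exc
```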
+
+- When raising an exception in Python 2, use ``raise ValueError('message')``
   instead of the older form ``raise ValueError, 'message'``.
 
-  The paren-using form is preferred because when the exception
-  arguments are long or include string formatting, you don't need to
-  use line continuation characters thanks to the containing
-  parentheses.  The older form is not legal syntax in Python 3.
+  The latter form is not legal Python 3 syntax.
+
+  The paren-using form also means that when the exception arguments are
+  long or include string formatting, you don't need to use line
+  continuation characters thanks to the containing parentheses.
 
 - When catching exceptions, mention specific exceptions whenever
   possible instead of using a bare ``except:`` clause.
@@ -838,6 +977,21 @@
      exception propagate upwards with ``raise``.  ``try...finally``
      can be a better way to handle this case.
 
+- When binding caught exceptions to a name, prefer the explicit name
+  binding syntax added in Python 2.6::
+
+      try:
+          process_data()
+      except Exception as exc:
+          raise DataProcessingFailedError(str(exc))
+
+  This is the only syntax supported in Python 3, and avoids the
+  ambiguity problems associated with the older comma-based syntax.
+
+- When catching operating system errors, prefer the explicit exception
+  hierarchy introduced in Python 3.3 over introspection of ``errno``
+  values.
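For example, a sketch contrasting the two styles (function names are illustrative; ``FileNotFoundError`` requires Python 3.3+):

```python
import errno

# Older style: catch a broad OSError and introspect errno.
def read_config_old(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        if exc.errno == errno.ENOENT:
            return ''
        raise

# Python 3.3+ style: catch the specific OSError subclass directly.
def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return ''
```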
+
 - Additionally, for all try/except clauses, limit the ``try`` clause
   to the absolute minimum amount of code necessary.  Again, this
   avoids masking bugs.
@@ -860,6 +1014,10 @@
           # Will also catch KeyError raised by handle_value()
           return key_not_found(key)
 
+- When a resource is local to a particular section of code, use a
+  ``with`` statement to ensure it is cleaned up promptly and reliably
+  after use. A try/finally statement is also acceptable.
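A minimal sketch of the ``with``-statement form:

```python
import tempfile

# The with statement guarantees prompt cleanup: the file is closed
# as soon as the block exits, even if an exception was raised inside.
with tempfile.TemporaryFile('w+') as f:
    f.write('hello')
    f.seek(0)
    data = f.read()

assert f.closed  # resource released immediately after the block
```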
+
 - Context managers should be invoked through separate functions or methods
   whenever they do something other than acquire and release resources.
   For example:
@@ -894,9 +1052,6 @@
       Yes: if foo.startswith('bar'):
       No:  if foo[:3] == 'bar':
 
-  The exception is if your code must work with Python 1.5.2 (but let's
-  hope not!).
-
 - Object type comparisons should always use isinstance() instead of
   comparing types directly. ::
 
@@ -905,11 +1060,15 @@
       No:  if type(obj) is type(1):
 
   When checking if an object is a string, keep in mind that it might
-  be a unicode string too!  In Python 2.3, str and unicode have a
+  be a unicode string too!  In Python 2, str and unicode have a
   common base class, basestring, so you can do::
 
       if isinstance(obj, basestring):
 
+  Note that in Python 3, ``unicode`` and ``basestring`` no longer exist
+  (there is only ``str``) and a bytes object is no longer a kind of
+  string (it is a sequence of integers instead).
+
 - For sequences (strings, lists, tuples), use the fact that empty
   sequences are false. ::
 
@@ -934,6 +1093,10 @@
   annotation style.  Instead, the annotations are left for users to
   discover and experiment with useful annotation styles.
 
+  It is recommended that third party experiments with annotations use an
+  associated decorator to indicate how the annotation should be
+  interpreted.
+
   Early core developer attempts to use function annotations revealed
   inconsistent, ad-hoc annotation styles.  For example:
 
@@ -991,6 +1154,8 @@
 
 .. [3] http://www.wikipedia.com/wiki/CamelCase
 
+.. [4] PEP 8 modernisation, July 2013
+   http://bugs.python.org/issue18472
 
 Copyright
 =========
diff --git a/pep-0315.txt b/pep-0315.txt
--- a/pep-0315.txt
+++ b/pep-0315.txt
@@ -4,7 +4,7 @@
 Last-Modified: $Date$
 Author: Raymond Hettinger <python at rcn.com>
         W Isaac Carroll <icarroll at pobox.com>
-Status: Deferred
+Status: Rejected
 Type: Standards Track
 Content-Type: text/plain
 Created: 25-Apr-2003
@@ -21,19 +21,32 @@
 
 Notice
 
-    Deferred; see
+    Rejected; see
+    http://mail.python.org/pipermail/python-ideas/2013-June/021610.html
+
+    This PEP has been deferred since 2006; see
     http://mail.python.org/pipermail/python-dev/2006-February/060718.html
 
     Subsequent efforts to revive the PEP in April 2009 did not
     meet with success because no syntax emerged that could
-    compete with a while-True and an inner if-break.
+    compete with the following form:
 
-    A syntax was found for a basic do-while loop but it found
-    had little support because the condition was at the top:
+        while True:
+            <setup code>
+            if not <condition>:
+                break
+            <loop body>
+
+    A syntax alternative to the one proposed in the PEP was found for
+    a basic do-while loop but it gained little support because the
+    condition was at the top:
 
         do ... while <cond>:
             <loop body>
 
+    Users of the language are advised to use the while-True form with
+    an inner if-break when a do-while loop would have been appropriate.
+
 
 Motivation
 
diff --git a/pep-0394.txt b/pep-0394.txt
--- a/pep-0394.txt
+++ b/pep-0394.txt
@@ -19,10 +19,17 @@
 be portable across ``*nix`` systems, regardless of the default version of the
 Python interpreter (i.e. the version invoked by the ``python`` command).
 
-* ``python2`` will refer to some version of Python 2.x
-* ``python3`` will refer to some version of Python 3.x
-* ``python`` *should* refer to the same target as ``python2`` but *may*
-  refer to ``python3`` on some bleeding edge distributions
+* ``python2`` will refer to some version of Python 2.x.
+* ``python3`` will refer to some version of Python 3.x.
+* for the time being, all distributions *should* ensure that ``python``
+  refers to the same target as ``python2``.
+* however, end users should be aware that ``python`` refers to ``python3``
+  on at least Arch Linux (that change is what prompted the creation of this
+  PEP), so ``python`` should be used in the shebang line only for scripts
+  that are source compatible with both Python 2 and 3.
+* in preparation for an eventual change in the default version of Python,
+  Python 2 only scripts should either be updated to be source compatible
+  with Python 3 or else to use ``python2`` in the shebang line.
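A sketch of a script that is safe to run via the version-agnostic ``python`` command, per the guidance above (the script contents are illustrative):

```python
#!/usr/bin/env python
# Source compatible with both Python 2 and Python 3, so the
# version-agnostic "python" shebang is acceptable per this PEP.
from __future__ import print_function

import sys

def python_major():
    """Report which major interpreter version actually ran the script."""
    return sys.version_info[0]

print("running under Python %d" % python_major())
```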
 
 
 Recommendation
@@ -103,15 +110,29 @@
 system. They will hopefully be helpful to any distributions considering
 making such a change.
 
-* Distributions that only include ``python3`` in their base install (i.e.
-  they do not provide ``python2`` by default) along with those that are
-  aggressively trying to reach that point (and are willing to break third
-  party scripts while attempting to get there) are already beginning to alias
-  the ``python`` command to ``python3``
-* More conservative distributions that are less willing to tolerate breakage
-  of third party scripts continue to alias it to ``python2``. Until the
-  conventions described in this PEP are more widely adopted, having ``python``
-  invoke ``python2`` will remain the recommended option.
+* The main barrier to a distribution switching the ``python`` command from
+  ``python2`` to ``python3`` isn't breakage within the distribution, but
+  instead breakage of private third party scripts developed by sysadmins
+  and other users. Updating the ``python`` command to invoke ``python3``
+  by default indicates that a distribution is willing to break such scripts
+  with errors that are potentially quite confusing for users who aren't
+  yet familiar with the backwards incompatible changes in Python 3. For
+  example, while the change of ``print`` from a statement to a builtin
+  function is relatively simple for automated converters to handle, the
+  SyntaxError from attempting to use the Python 2 notation in Python 3 is
+  thoroughly confusing if you aren't already aware of the change::
+
+      $ python3 -c 'print "Hello, world!"'
+        File "<string>", line 1
+          print "Hello, world!"
+                              ^
+      SyntaxError: invalid syntax
+
+* Avoiding breakage of such third party scripts is the key reason this
+  PEP recommends that ``python`` continue to refer to ``python2`` for the
+  time being. Until the conventions described in this PEP are more widely
+  adopted, having ``python`` invoke ``python2`` will remain the recommended
+  option.
 * The ``pythonX.X`` (e.g. ``python2.6``) commands exist on some systems, on
   which they invoke specific minor versions of the Python interpreter. It
   can be useful for distribution-specific packages to take advantage of these
@@ -148,10 +169,13 @@
   ``python`` command is only executed in an interactive manner as a user
   convenience, or to run scripts that are source compatible with both Python
   2 and Python 3.
+* one symbolic date being considered for a possible change to the official
+  recommendation in this PEP is the planned switch of Python 2.7 from full
+  maintenance to security update only status in 2015 (see PEP 373).
 
 
 Backwards Compatibility
-=========================
+=======================
 
 A potential problem can arise if a script adhering to the
 ``python2``/``python3`` convention is executed on a system not supporting
@@ -217,7 +241,8 @@
 This PEP deliberately excludes any proposals relating to Microsoft Windows, as
 devising an equivalent solution for Windows was deemed too complex to handle
 here. PEP 397 and the related discussion on the python-dev mailing list
-address this issue.
+address this issue (like this PEP, the PEP 397 launcher invokes Python 2 by
+default if versions of both Python 2 and 3 are installed on the system).
 
 
 References
diff --git a/pep-0413.txt b/pep-0413.txt
--- a/pep-0413.txt
+++ b/pep-0413.txt
@@ -627,7 +627,7 @@
           <news entries>
         # Add maint.1, compat.1 etc as releases are made
 
-Putting the version information in the directory heirarchy isn't strictly
+Putting the version information in the directory hierarchy isn't strictly
 necessary (since the NEWS file generator could figure it out from the version
 history), but does make it easier for *humans* to keep the different versions
 in order.
diff --git a/pep-0426.txt b/pep-0426.txt
--- a/pep-0426.txt
+++ b/pep-0426.txt
@@ -12,7 +12,8 @@
 Content-Type: text/x-rst
 Requires: 440
 Created: 30 Aug 2012
-Post-History: 14 Nov 2012, 5 Feb 2013, 7 Feb 2013, 9 Feb 2013, 27-May-2013
+Post-History: 14 Nov 2012, 5 Feb 2013, 7 Feb 2013, 9 Feb 2013,
+              27 May 2013, 20 Jun 2013, 23 Jun 2013, 14 Jul 2013
 Replaces: 345
 
 
@@ -21,8 +22,7 @@
 
 This PEP describes a mechanism for publishing and exchanging metadata
 related to Python distributions. It includes specifics of the field names,
-and their semantics and
-usage.
+and their semantics and usage.
 
 This document specifies version 2.0 of the metadata format.
 Version 1.0 is specified in PEP 241.
@@ -42,7 +42,9 @@
 
    "I" in this doc refers to Nick Coghlan. Daniel and Donald either wrote or
    contributed to earlier versions, and have been providing feedback as this
-   initial draft of the JSON-based rewrite has taken shape.
+   JSON-based rewrite has taken shape. Daniel and Donald have also been
+   vetting the proposal as we go to ensure it is practical to implement for
+   both clients and index servers.
 
    Metadata 2.0 represents a major upgrade to the Python packaging ecosystem,
    and attempts to incorporate experience gained over the 15 years(!) since
@@ -61,14 +63,15 @@
    * this PEP, covering the core metadata format
    * PEP 440, covering the versioning identification and selection scheme
    * a new PEP to define v2.0 of the sdist format
-   * an updated wheel PEP (v1.1) to add pymeta.json
-   * an updated installation database PEP both for pymeta.json and to add
-     a linking scheme to better support runtime selection of dependencies,
-     as well as recording which extras are currently available
+   * an updated wheel PEP (v1.1) to add pydist.json (and possibly convert
+     the wheel metadata file from Key:Value to JSON)
+   * an updated installation database PEP to adopt pydist.json
+   * an alternative to \*.pth files that avoids system global side effects
+     and better supports runtime selection of dependencies
    * a new static config PEP to standardise metadata generation and
      creation of sdists
-   * PEP 439, covering a bootstrapping mechanism for ``pip``
-   * a distutils upgrade PEP, adding metadata v2.0 and wheel support.
+   * a PEP to cover bundling ``pip`` with the CPython installers
 
    It's going to take a while to work through all of these and make them
    a reality. The main change from our last attempt at this is that we're
@@ -84,137 +87,287 @@
    an irrelevant distraction for future readers.
 
 
-Definitions
-===========
+A Note on Time Frames
+=====================
+
+There's a lot of work going on in the Python packaging space at the moment.
+In the near term (up until the release of Python 3.4), those efforts will be
+focused on the existing metadata standards, both those defined in Python
+Enhancement Proposals, and the de facto standards defined by the setuptools
+project.
+
+This PEP is about setting out a longer term goal for the ecosystem that
+captures those existing capabilities in a format that is easier to work
+with. There are still a number of key open questions (mostly related to
+source based distribution), and those won't be able to receive proper
+attention from the development community until the other near term
+concerns have been resolved.
+
+
+Purpose
+=======
+
+The purpose of this PEP is to define a common metadata interchange format
+for communication between software publication tools and software integration
+tools in the Python ecosystem. One key aim is to support full dependency
+analysis in that ecosystem without requiring the execution of arbitrary
+Python code by those doing the analysis. Another aim is to encourage good
+software distribution practices by default, while continuing to support the
+current practices of almost all existing users of the Python Package Index
+(both publishers and integrators).
+
+The design draws on the Python community's 15 years of experience with
+distutils based software distribution, and incorporates ideas and concepts
+from other distribution systems, including Python's setuptools, pip and
+other projects, Ruby's gems, Perl's CPAN, Node.js's npm, PHP's composer
+and Linux packaging systems such as RPM and APT.
+
+
+Development, Distribution and Deployment of Python Software
+===========================================================
+
+The metadata design in this PEP is based on a particular conceptual model
+of the software development and distribution process. This model consists of
+the following phases:
+
+* Software development: this phase involves working with a source checkout
+  for a particular application to add features and fix bugs. It is
+  expected that developers in this phase will need to be able to build the
+  software, run the software's automated test suite, run project specific
+  utility scripts and publish the software.
+
+* Software publication: this phase involves taking the developed software
+  and making it available for use by software integrators. This includes
+  creating the descriptive metadata defined in this PEP, as well as making the
+  software available (typically by uploading it to an index server).
+
+* Software integration: this phase involves taking published software
+  components and combining them into a coherent, integrated system. This
+  may be done directly using Python-specific cross-platform tools, or it may
+  be handled through conversion to development-language-neutral,
+  platform-specific packaging systems.
+
+* Software deployment: this phase involves taking integrated software
+  components and deploying them on to the target system where the software
+  will actually execute.
+
+The publication and integration phases are collectively referred to as
+the distribution phase, and the individual software components distributed
+in that phase are referred to as "distributions".
+
+The exact details of these phases will vary greatly for particular use cases.
+Deploying a web application to a public Platform-as-a-Service provider,
+publishing a new release of a web framework or scientific library,
+creating an integrated Linux distribution or upgrading a custom application
+running in a secure enclave are all situations this metadata design should
+be able to handle.
+
+The complexity of the metadata described in this PEP thus arises directly
+from the actual complexities associated with software development,
+distribution and deployment in a wide range of scenarios.
+
+
+Supporting definitions
+----------------------
 
 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
 "SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
 document are to be interpreted as described in RFC 2119.
 
-"Distributions" are deployable software components published through an index
-server or otherwise made available for installation.
-
-"Versions" are uniquely identified snapshots of a distribution.
-
-"Distribution archives" are the packaged files which are used to publish
-and distribute the software.
-
-"Source archives" require build tools to be available on the target
-system.
+"Projects" are software components that are made available for integration.
+Projects include Python libraries, frameworks, scripts, plugins,
+applications, collections of data or other resources, and various
+combinations thereof. Public Python projects are typically registered on
+the `Python Package Index`_.
+
+"Releases" are uniquely identified snapshots of a project.
+
+"Distributions" are the packaged files which are used to publish
+and distribute a release.
+
+"Source archive" and "VCS checkout" both refer to the raw source code for
+a release, prior to creation of an sdist or binary archive.
+
+An "sdist" is a publication format providing the distribution metadata
+and any source files that are essential to creating a binary archive for
+the distribution. Creating a binary archive from an sdist requires that
+the appropriate build tools be available on the system.
 
 "Binary archives" only require that prebuilt files be moved to the correct
 location on the target system. As Python is a dynamically bound
-cross-platform language, many "binary" archives will contain only pure
-Python source code.
+cross-platform language, many so-called "binary" archives will contain only
+pure Python source code.
+
+"Contributors" are individuals and organizations that work together to
+develop a software component.
+
+"Publishers" are individuals and organizations that make software components
+available for integration (typically by uploading distributions to an
+index server).
+
+"Integrators" are individuals and organizations that incorporate published
+distributions as components of an application or larger system.
 
 "Build tools" are automated tools intended to run on development systems,
 producing source and binary distribution archives. Build tools may also be
-invoked by installation tools in order to install software distributed as
-source archives rather than prebuilt binary archives.
+invoked by integration tools in order to build software distributed as
+sdists rather than prebuilt binary archives.
 
 "Index servers" are active distribution registries which publish version and
 dependency metadata and place constraints on the permitted metadata.
 
+"Public index servers" are index servers which allow distribution uploads
+from untrusted third parties. The `Python Package Index`_ is a public index
+server.
+
 "Publication tools" are automated tools intended to run on development
 systems and upload source and binary distribution archives to index servers.
 
-"Installation tools" are automated tools intended to run on production
-systems, consuming source and binary distribution archives from an index
-server or other designated location and deploying them to the target system.
+"Integration tools" are automated tools that consume the metadata and
+distribution archives published by an index server or other designated
+source, and make use of them in some fashion, such as installing them or
+converting them to a platform specific packaging format.
+
+"Installation tools" are integration tools specifically intended to run on
+deployment targets, consuming source and binary distribution archives from
+an index server or other designated location and deploying them to the target
+system.
 
 "Automated tools" is a collective term covering build tools, index servers,
-publication tools, installation tools and any other software that produces
+publication tools, integration tools and any other software that produces
 or consumes distribution version and dependency metadata.
 
-"Projects" refers to the developers that manage the creation of a particular
-distribution.
-
 "Legacy metadata" refers to earlier versions of this metadata specification,
 along with the supporting metadata file formats defined by the
 ``setuptools`` project.
 
-
-Development and distribution activities
-=======================================
-
-Making effective use of a common metadata format requires a common
-understanding of the most complex development and distribution model
-the format is intended to support. The metadata format described in this
-PEP is based on the following activities:
-
-* Development: during development, a user is operating from a
-  source checkout (or equivalent) for the current project. Dependencies must
-  be available in order to build, test and create a source archive of the
-  distribution.
-
-  .. note::
-     As a generated file, the full distribution metadata often won't be
-     available in a raw source checkout or tarball. In such cases, the
-     relevant distribution metadata is generally obtained from another
-     location, such as the last published release, or by generating it
-     based on a command given in a standard input file. This spec
-     deliberately avoids handling that scenario, instead falling back on
-     the existing ``setup.py`` functionality.
-
-* Build: the build step is the process of turning a source archive into a
-  binary archive. Dependencies must be available in order to build and
-  create a binary archive of the distribution (including any documentation
-  that is installed on target systems).
-
-* Deployment: the deployment phase consists of two subphases:
-
-  * Installation: the installation phase involves getting the distribution
-    and all of its runtime dependencies onto the target system. In this
-    phase, the distribution may already be on the system (when upgrading or
-    reinstalling) or else it may be a completely new installation.
-
-  * Usage: the usage phase, also referred to as "runtime", is normal usage
-    of the distribution after it has been installed on the target system.
-
-The metadata format described in this PEP is designed to enable the
-following:
-
-* It should be practical to have separate development systems, build systems
-  and deployment systems.
-* It should be practical to install dependencies needed specifically to
-  build source archives only on development systems.
-* It should be practical to install dependencies needed specifically to
-  build the software only on development and build systems, as well as
-  optionally on deployment systems if installation from source archives
-  is needed.
-* It should be practical to install dependencies needed to run the
-  distribution only on development and deployment systems.
-* It should be practical to install the dependencies needed to run a
-  distribution's test suite only on development systems, as well as
-  optionally on deployment systems.
-* It should be practical for repackagers to separate out the build
-  dependencies needed to build the application itself from those required
-  to build its documentation (as the documentation often doesn't need to
-  be rebuilt when porting an application to a different platform).
+"Distro" is used as the preferred term for Linux distributions, to help
+avoid confusion with the Python-specific meaning of the term "distribution".
+
+"Dist" is the preferred abbreviation for "distributions" in the sense defined
+in this PEP.
+
+"Qualified name" is a dotted Python identifier. For imported modules and
+packages, the qualified name is available as the ``__name__`` attribute,
+while for functions and classes it is available as the ``__qualname__``
+attribute.
+
+A "fully qualified name" uniquely locates an object in the Python module
+namespace. For imported modules and packages, it is the same as the
+qualified name. For other Python objects, the fully qualified name consists
+of the qualified name of the containing module or package, a colon (``:``)
+and the qualified name of the object relative to the containing module or
+package.
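
The distinction between the two kinds of names defined above can be sketched
as follows; the use of ``email.mime.text`` here is purely illustrative:

```python
# Sketch: qualified vs fully qualified names, per the definitions above.
import email.mime.text

# For imported modules and packages, the qualified name is __name__:
assert email.mime.text.__name__ == "email.mime.text"

# For other objects, the fully qualified name joins the containing
# module's qualified name and the object's __qualname__ with a colon:
obj = email.mime.text.MIMEText
fully_qualified = f"{obj.__module__}:{obj.__qualname__}"
assert fully_qualified == "email.mime.text:MIMEText"
```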
+
+
+Integration and deployment of distributions
+-------------------------------------------
+
+The primary purpose of the distribution metadata is to support integration
+and deployment of distributions as part of larger applications and systems.
+
+Integration and deployment can in turn be broken down into further substeps.
+
+* Build: the build step is the process of turning a VCS checkout, source
+  archive or sdist into a binary archive. Dependencies must be available
+  in order to build and create a binary archive of the distribution
+  (including any documentation that is installed on target systems).
+
+* Installation: the installation step involves getting the distribution
+  and all of its runtime dependencies onto the target system. In this
+  step, the distribution may already be on the system (when upgrading or
+  reinstalling) or else it may be a completely new installation.
+
+* Runtime: this is normal usage of a distribution after it has been
+  installed on the target system.
+
+These three steps may all occur directly on the target system. Alternatively,
+the build step may be separated out by using binary archives provided by the
+publisher of the distribution, or by creating the binary archives on a
+separate system prior to deployment. The advantage of the latter approach
+is that it minimizes the dependencies that need to be installed on
+deployment targets (as the build dependencies will be needed only on the
+build systems).
+
+The published metadata for distributions SHOULD allow integrators, with the
+aid of build and integration tools, to:
+
+* obtain the original source code that was used to create a distribution
+* identify and retrieve the dependencies (if any) required to use a
+  distribution
+* identify and retrieve the dependencies (if any) required to build a
+  distribution from source
+* identify and retrieve the dependencies (if any) required to run a
+  distribution's test suite
+* find resources on using and contributing to the project
+* access sufficiently rich metadata to support contacting distribution
+  publishers through appropriate channels, as well as finding distributions
+  that are relevant to particular problems
+
+
+Development and publication of distributions
+--------------------------------------------
+
+The secondary purpose of the distribution metadata is to support effective
+collaboration amongst software contributors and publishers during the
+development phase.
+
+The published metadata for distributions SHOULD allow contributors
+and publishers, with the aid of build and publication tools, to:
+
+* perform all the same activities needed to effectively integrate and
+  deploy the distribution
+* identify and retrieve the additional dependencies needed to develop and
+  publish the distribution
+* specify the dependencies (if any) required to use the distribution
+* specify the dependencies (if any) required to build the distribution
+  from source
+* specify the dependencies (if any) required to run the distribution's
+  test suite
+* specify the additional dependencies (if any) required to develop and
+  publish the distribution
+
+
+Standard build system
+---------------------
 
 .. note::
 
-    This "most complex supported scenario" is almost *exactly* what has to
-    happen to get an upstream Python package into a Linux distribution, and
-    is why the current crop of automatic Python metadata -> Linux distro
-    metadata converters have some serious issues, at least from the point of
-    view of complying with distro packaging policies: the information
-    they need to comply with those policies isn't available from the
-    upstream projects, and all current formats for publishing it are
-    distro specific. This means either upstreams have to maintain metadata
-    for multiple distributions (which rarely happens) or else repackagers
-    have to do a lot of work manually in order to separate out these
-    dependencies in a  way that complies with those policies.
-
-    One thing this PEP aims to do is define a metadata format that at least
-    has the *potential* to provide the info repackagers need, thus allowing
-    upstream Python projects and Linux distro repackagers to collaborate more
-    effectively (and, ideally, make it possible to reliably automate
-    the process of converting upstream Python distributions into policy
-    compliant distro packages).
-
-    Some items in this section (and the contents of this note) will likely
-    end up moving down to the "Rationale for changes from PEP 345" section.
+   The standard build system currently described in the PEP is a draft based
+   on existing practices for projects using distutils or setuptools as their
+   build system (or other projects, like ``d2to1``, that expose a setup.py
+   file for backwards compatibility with existing tools).
+
+   The specification doesn't currently cover expected argument support for
+   the commands, which is a limitation that needs to be addressed before the
+   PEP can be considered ready for acceptance.
+
+   It is also possible that the "meta build system" will be separated out
+   into a distinct PEP in the coming months (similar to the separation of
+   the versioning and requirement specification standard out to PEP 440).
+
+   If a `suitable API can be worked out <Metabuild system>`__, then it may
+   even be possible to switch to a more declarative API for build system
+   specification.
+
+Both development and integration of distributions rely on the ability to
+build extension modules and perform other operations in a distribution
+independent manner.
+
+The current iteration of the metadata relies on the
+``distutils``/``setuptools`` commands system to support these necessary
+development and integration activities:
+
+* ``python setup.py dist_info``: generate distribution metadata in place
+  given a source archive or VCS checkout
+* ``python setup.py sdist``: create an sdist from a source archive
+  or VCS checkout
+* ``python setup.py build_ext --inplace``: build extension modules in place
+  given an sdist, source archive or VCS checkout
+* ``python setup.py test``: run the distribution's test suite in place
+  given an sdist, source archive or VCS checkout
+* ``python setup.py bdist_wheel``: create a binary archive from an sdist,
+  source archive or VCS checkout
 
 
 Metadata format
@@ -247,22 +400,30 @@
 Automated tools MAY automatically derive valid values from other
 information sources (such as a version control system).
 
+Automated tools, especially public index servers, MAY impose additional
+length restrictions on metadata beyond those enumerated in this PEP. Such
+limits SHOULD be imposed where necessary to protect the integrity of a
+service, based on the available resources and the service provider's
+judgment of reasonable metadata capacity requirements.
+
 
 Metadata files
 --------------
 
-The information defined in this PEP is serialised to ``pymeta.json``
-files for some use cases. As indicated by the extension, these
-are JSON-encoded files. Each file consists of a single serialised mapping,
-with fields as described in this PEP.
+The information defined in this PEP is serialised to ``pydist.json``
+files for some use cases. These are files containing UTF-8 encoded JSON
+metadata.
+
+Each metadata file consists of a single serialised mapping, with fields as
+described in this PEP.
 
 There are three standard locations for these metadata files:
 
-* as a ``{distribution}-{version}.dist-info/pymeta.json`` file in an
+* as a ``{distribution}-{version}.dist-info/pydist.json`` file in an
   ``sdist`` source distribution archive
-* as a ``{distribution}-{version}.dist-info/pymeta.json`` file in a ``wheel``
+* as a ``{distribution}-{version}.dist-info/pydist.json`` file in a ``wheel``
   binary distribution archive
-* as a ``{distribution}-{version}.dist-info/pymeta.json`` file in a local
+* as a ``{distribution}-{version}.dist-info/pydist.json`` file in a local
   Python installation database
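
The three locations above share a common relative path, which can be sketched
as follows (the distribution name and version are invented for illustration):

```python
# Sketch of the standard pydist.json location inside an sdist, a wheel,
# and the local installation database, per the list above.
def dist_info_metadata_path(distribution, version):
    # {distribution}-{version}.dist-info/pydist.json
    return f"{distribution}-{version}.dist-info/pydist.json"

path = dist_info_metadata_path("ComfyChair", "1.0a2")
assert path == "ComfyChair-1.0a2.dist-info/pydist.json"
```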
 
 .. note::
@@ -270,21 +431,16 @@
    These locations are to be confirmed, since they depend on the definition
    of sdist 2.0 and the revised installation database standard. There will
    also be a wheel 1.1 format update after this PEP is approved that
-   mandates 2.0+ metadata.
+   mandates provision of 2.0+ metadata.
 
 Other tools involved in Python distribution may also use this format.
 
-It is expected that these metadata files will be generated by build tools
-based on other input formats (such as ``setup.py``) rather than being
-edited by hand.
-
-.. note::
-
-    It may be appropriate to add a "./setup.py dist_info" command to
-    setuptools to allow just the sdist metadata files to be generated
-    without having to build the full sdist archive. This would be
-    similar to the existing "./setup.py egg_info" command in setuptools,
-    which would continue to emit the legacy metadata format.
+As JSON files are generally awkward to edit by hand, it is RECOMMENDED
+that these metadata files be generated by build tools based on other
+input formats (such as ``setup.py``) rather than being used directly as
+a data input format. Generating the metadata as part of the publication
+process also helps to deal with version specific fields (including the
+source URL and the version field itself).
 
 For backwards compatibility with older installation tools, metadata 2.0
 files MAY be distributed alongside legacy metadata.
@@ -292,6 +448,10 @@
 Index servers MAY allow distributions to be uploaded and installation tools
 MAY allow distributions to be installed with only legacy metadata.
 
+Automated tools MAY attempt to automatically translate legacy metadata to
+the format described in this PEP. Advice for doing so effectively is given
+in Appendix A.
+
 
 Essential dependency resolution metadata
 ----------------------------------------
@@ -304,39 +464,92 @@
 fields:
 
 * ``metadata_version``
+* ``generator``
 * ``name``
 * ``version``
-* ``build_label``
-* ``version_url``
+* ``source_label``
+* ``source_url``
 * ``extras``
-* ``requires``
-* ``may-require``
-* ``build-requires``
-* ``build-may-require``
-* ``dev-requires``
-* ``dev-may-require``
+* ``meta_requires``
+* ``run_requires``
+* ``test_requires``
+* ``build_requires``
+* ``dev_requires``
 * ``provides``
 * ``obsoleted_by``
 * ``supports_environments``
 
 When serialised to a file, the name used for this metadata set SHOULD
-be ``pymeta-minimal.json``.
-
-Abbreviated metadata
---------------------
-
-Some metadata fields have the potential to contain a lot of information
-that will rarely be referenced, greatly increasing storage requirements
-without providing significant benefits.
-
-The abbreviated metadata for a distribution consists of all fields
-*except* the following:
-
-* ``description``
-* ``contributors``
+be ``pydist-dependencies.json``.
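
A minimal sketch of what such a serialised mapping might look like, using the
field names listed above; the generator name and the dependency shown are
invented for illustration:

```python
# Hypothetical minimal dependency-resolution metadata; only the field
# names come from the PEP, the values are invented.
import json

metadata = {
    "metadata_version": "2.0",
    "generator": "example-build-tool (0.1)",  # hypothetical tool name
    "name": "ComfyChair",
    "version": "1.0a2",
    "run_requires": [
        {"requires": ["SoftCushions"]},  # hypothetical dependency
    ],
}

# The file is a single serialised mapping, encoded as UTF-8 JSON.
serialised = json.dumps(metadata, sort_keys=True).encode("utf-8")
restored = json.loads(serialised.decode("utf-8"))
assert restored["metadata_version"] == "2.0"
assert restored["run_requires"][0]["requires"] == ["SoftCushions"]
```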
+
+
+Export metadata
+---------------
+
+Distributions may define components that are intended for use by other
+distributions (such as plugins). As it can be beneficial to know whether or
+not a distribution defines any such exports without needing to parse any
+metadata, a suitable subset is defined for serialisation to a separate file
+in the ``dist-info`` metadata directory.
+
+The export metadata consists of the following fields:
+
+* ``metadata_version``
+* ``generator``
+* ``name``
+* ``version``
+* ``exports``
 
 When serialised to a file, the name used for this metadata set SHOULD
-be ``pymeta-short.json``.
+be ``pydist-exports.json``.
+
+
+Command metadata
+----------------
+
+Distributions may define commands that will be available from the command
+line following installation. As it can be beneficial to know whether or not
+a distribution has such commands without needing to parse any metadata,
+a suitable subset is defined for serialisation to a separate file in the
+``dist-info`` metadata directory.
+
+The command metadata consists of the following fields:
+
+* ``metadata_version``
+* ``generator``
+* ``name``
+* ``version``
+* ``commands``
+
+When serialised to a file, the name used for this metadata set SHOULD
+be ``pydist-commands.json``.
+
+
+Included documents
+------------------
+
+Rather than being incorporated directly into the structured metadata, some
+supporting documents are included alongside the metadata file in the
+``dist-info`` metadata directory.
+
+To accommodate the variety of existing naming conventions for these files,
+they are explicitly identified in the ``document_names`` field, rather
+than expecting index servers and other automated tools to identify them
+automatically.
+
+
+Metadata validation
+-------------------
+
+A `jsonschema <https://pypi.python.org/pypi/jsonschema>`__ description of
+the distribution metadata is `available
+<http://hg.python.org/peps/file/default/pep-0426/pydist-schema.json>`__.
+
+This schema does NOT currently handle validation of some of the more complex
+string fields (instead treating them as opaque strings).
+
+Except where otherwise noted, all URL fields in the metadata MUST comply
+with RFC 3986.
 
 
 Core metadata
@@ -376,6 +589,17 @@
     "metadata_version": "2.0"
 
 
+Generator
+---------
+
+Name (and optional version) of the program that generated the file,
+if any.  A manually produced file would omit this field.
+
+Example::
+
+    "generator": "setuptools (0.9)"
+
+
 Name
 ----
 
@@ -391,7 +615,7 @@
 * hyphens (``-``)
 * periods (``.``)
 
-Distributions named MUST start and end with an ASCII letter or digit.
+Distribution names MUST start and end with an ASCII letter or digit.
 
 Automated tools MUST reject non-compliant names.
 
@@ -399,14 +623,14 @@
 consider hyphens and underscores to be equivalent.
 
 Index servers MAY consider "confusable" characters (as defined by the
-Unicode Consortium in `TR39: Unicode Security Mechanisms <TR39>`__) to be
+Unicode Consortium in `TR39: Unicode Security Mechanisms <TR39>`_) to be
 equivalent.
 
 Index servers that permit arbitrary distribution name registrations from
 untrusted sources SHOULD consider confusable characters to be equivalent
 when registering new distributions (and hence reject them as duplicates).
 
-Installation tools MUST NOT silently accept a confusable alternate
+Integration tools MUST NOT silently accept a confusable alternate
 spelling as matching a requested distribution name.
 
 At time of writing, the characters in the ASCII subset designated as
@@ -421,45 +645,6 @@
 
     "name": "ComfyChair"
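
The naming rules above can be sketched as a validation and comparison helper;
the regular expression is one reading of the stated restrictions, not
normative text:

```python
# Sketch: validating distribution names (ASCII letters, digits,
# underscores, hyphens, periods; must start and end with a letter or
# digit) and treating hyphens and underscores as equivalent.
import re

VALID_NAME = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9._-]*[A-Za-z0-9])?$")

def is_valid_name(name):
    return VALID_NAME.match(name) is not None

def names_equivalent(a, b):
    # Hyphens and underscores are to be considered equivalent.
    return a.replace("_", "-") == b.replace("_", "-")

assert is_valid_name("ComfyChair")
assert not is_valid_name("Twisted Web")      # internal space
assert not is_valid_name("-leading-hyphen")  # bad first character
assert names_equivalent("comfy_chair", "comfy-chair")
```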
 
-.. note::
-
-    Debian doesn't actually permit underscores in names, but that seems
-    unduly restrictive for this spec given the common practice of using
-    valid Python identifiers as Python distribution names. A Debian side
-    policy of converting underscores to hyphens seems easy enough to
-    implement (and the requirement to consider hyphens and underscores as
-    equivalent ensures that doing so won't introduce any conflicts).
-
-    We're deliberately *not* following Python 3 down the path of arbitrary
-    unicode identifiers at this time. The security implications of doing so
-    are substantially worse in the software distribution use case (it opens
-    up far more interesting attack vectors than mere code obfuscation), the
-    existing tooling really only works properly if you abide by the stated
-    restrictions and changing it would require a *lot* of work for all
-    the automated tools in the chain.
-
-    PyPI has recently been updated to reject non-compliant names for newly
-    registered projects, but existing non-compliant names are still
-    tolerated when using legacy metadata formats. Affected distributions
-    will need to change their names (typically be replacing spaces with
-    hyphens) before they can migrate to the new metadata formats.
-
-    Donald Stufft ran an analysis, and the new restrictions impact less
-    than 230 projects out of the ~31k already on PyPI. This isn't that
-    surprising given the fact that many existing tools could already
-    exhibit odd behaviour when attempting to deal with non-compliant
-    names, implicitly discouraging the use of more exotic names.
-
-    Of those projects, ~200 have the only non-compliant character as an
-    internal space (e.g. "Twisted Web"). These will be automatically
-    migrated by replacing the spaces with hyphens (e.g. "Twisted-Web"),
-    which is what you have to actually type to install these distributions
-    with ``setuptools`` (which powers both ``easy_install`` and ``pip``).
-
-    The remaining ~30 will be investigated manually and decided upon on a
-    case by case basis how to migrate them to the new naming rules (in
-    consultation with the maintainers of those projects where possible).
-
 
 Version
 -------
@@ -469,58 +654,88 @@
 variety of flexible version specification mechanisms (see PEP 440 for
 details).
 
+Version identifiers MUST comply with the format defined in PEP 440.
+
+Version identifiers MUST be unique within each project.
+
 Example::
 
     "version": "1.0a2"
 
 
-Additional identifying metadata
-===============================
-
-This section specifies fields that provide other identifying details
-that are unique to this distribution.
+Summary
+-------
+
+A short summary of what the distribution does.
+
+This field SHOULD contain fewer than 512 characters and MUST contain fewer
+than 2048.
+
+This field SHOULD NOT contain any line breaks.
+
+A more complete description SHOULD be included as a separate file in the
+sdist for the distribution. See `Document names`_ for details.
+
+Example::
+
+    "summary": "A module that is more fiendish than soft cushions."
+
+
+Source code metadata
+====================
+
+This section specifies fields that provide identifying details for the
+source code used to produce this distribution.
 
 All of these fields are optional. Automated tools MUST operate correctly if
 a distribution does not provide them, including failing cleanly when an
 operation depending on one of these fields is requested.
 
 
-Build label
------------
-
-A constrained identifying text string, as defined in PEP 440. Build labels
-cannot be used in ordered version comparisons, but may be used to select
-an exact version (see PEP 440 for details).
-
+Source label
+------------
+
+A constrained identifying text string, as defined in PEP 440. Source labels
+cannot be used in version specifiers; they are included for informational
+purposes only.
+
+Source labels MUST meet the character restrictions defined in PEP 440.
+
+Source labels MUST be unique within each project and MUST NOT match any
+defined version for the project.
 
 Examples::
 
-    "build_label": "1.0.0-alpha.1"
-
-    "build_label": "1.3.7+build.11.e0f985a"
-
-    "build_label": "v1.8.1.301.ga0df26f"
-
-    "build_label": "2013.02.17.dev123"
-
-
-Version URL
------------
-
-A string containing a full URL where this specific version of the
-distribution can be downloaded.  (This means that the URL can't be
-something like ``"https://github.com/pypa/pip/archive/master.zip"``, but
-instead must be ``"https://github.com/pypa/pip/archive/1.3.1.zip"``.)
-
-Some appropriate targets for a version URL are a source tarball, an sdist
-archive or a direct reference to a tag or specific commit in an online
-version control system.
-
-All version URL references SHOULD either specify a secure transport
-mechanism (such as ``https``) or else include an expected hash value in the
-URL for verification purposes. If an insecure transport is specified without
-any hash information (or with hash information that the tool doesn't
-understand), automated tools SHOULD at least emit a warning and MAY
+    "source_label": "1.0.0-alpha.1"
+
+    "source_label": "1.3.7+build.11.e0f985a"
+
+    "source_label": "v1.8.1.301.ga0df26f"
+
+    "source_label": "2013.02.17.dev123"
+
+
+Source URL
+----------
+
+A string containing a full URL where the source for this specific version of
+the distribution can be downloaded.
+
+Source URLs MUST be unique within each project. This means that the URL
+can't be something like ``"https://github.com/pypa/pip/archive/master.zip"``,
+but instead must be ``"https://github.com/pypa/pip/archive/1.3.1.zip"``.
+
+The source URL MUST reference either a source archive or a tag or specific
+commit in an online version control system that permits creation of a
+suitable VCS checkout. It is intended primarily for integrators that
+wish to recreate the distribution from the original source form.
+
+All source URL references SHOULD specify a secure transport
+mechanism (such as ``https``), include an expected hash value in the
+URL for verification purposes, or both. If an insecure transport is specified
+without any hash information, with hash information that the tool doesn't
+understand, or with a selected hash algorithm that the tool considers too
+weak to trust, automated tools SHOULD at least emit a warning and MAY
 refuse to rely on the URL.
 
 It is RECOMMENDED that only hashes which are unconditionally provided by
@@ -530,7 +745,7 @@
 ``'sha512'``.
 
 For source archive references, an expected hash value may be specified by
-including a ``<hash-algorithm>=<expected-hash>`` as part of the URL
+including a ``<hash-algorithm>=<expected-hash>`` entry as part of the URL
 fragment.
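The hash-fragment verification described above can be sketched as follows (a minimal illustration; the helper names are mine, not part of the spec, and `hashlib.algorithms_available` is used as a stand-in for the tool's own notion of trusted algorithms):

```python
import hashlib
import urllib.parse

def expected_hash_from_url(url):
    """Extract an (algorithm, hexdigest) pair from a source URL
    fragment such as "#sha1=da9234...", or None if absent/unusable."""
    fragment = urllib.parse.urlsplit(url).fragment
    if "=" not in fragment:
        return None
    algorithm, _, digest = fragment.partition("=")
    if algorithm not in hashlib.algorithms_available:
        return None
    return algorithm, digest

def verify_archive(url, data):
    """Check downloaded archive bytes against the hash embedded in the
    URL fragment. Returns None when the URL carries no usable hash, in
    which case a tool SHOULD at least warn for insecure transports."""
    expected = expected_hash_from_url(url)
    if expected is None:
        return None
    algorithm, digest = expected
    return hashlib.new(algorithm, data).hexdigest() == digest
```

A `False` result means the archive does not match the publisher's stated hash and should not be trusted.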
 
 For version control references, the ``VCS+protocol`` scheme SHOULD be
@@ -542,32 +757,9 @@
 
 Example::
 
-    "version_url": "https://github.com/pypa/pip/archive/1.3.1.zip"
-    "version_url": "http://github.com/pypa/pip/archive/1.3.1.zip#sha1=da9234ee9982d4bbb3c72346a6de940a148ea686"
-    "version_url": "git+https://github.com/pypa/pip.git@1.3.1"
-
-.. note::
-
-    This was called "Download-URL" in previous versions of the metadata. It
-    has been renamed, since there are plenty of other download locations and
-    this URL is meant to be a way to get the original source for development
-    purposes (or to generate an SRPM or other platform specific equivalent).
-
-    For extra fun and games, it appears that unlike "svn+ssh://",
-    neither "git+ssh://" nor "hg+ssh://" natively support direct linking to a
-    particular tag (hg does support direct links to bookmarks through the URL
-    fragment, but that doesn't help for git and doesn't appear to be what I
-    want anyway).
-
-    However pip does have a `defined convention
-    <http://www.pip-installer.org/en/latest/logic.html#vcs-support>`__ for
-    this kind of link, which effectively splits a "URL" into "<repo>@<tag>".
-
-    The PEP simply adopts pip's existing solution to this problem.
-
-    This field is separate from the project URLs, as it's expected to
-    change for each version, while the project URLs are expected to
-    be fairly stable.
+    "source_url": "https://github.com/pypa/pip/archive/1.3.1.zip"
+    "source_url": "http://github.com/pypa/pip/archive/1.3.1.zip#sha1=da9234ee9982d4bbb3c72346a6de940a148ea686"
+    "source_url": "git+https://github.com/pypa/pip.git@1.3.1"
 
 
 Additional descriptive metadata
@@ -580,74 +772,29 @@
 a distribution does not provide them, including failing cleanly when an
 operation depending on one of these fields is requested.
 
-Summary
+
+License
 -------
 
-A one-line summary of what the distribution does.
-
-Publication tools SHOULD emit a warning if this field is not provided. Index
-servers MAY require that this field be present before allowing a
-distribution to be uploaded.
+A short string summarising the license used for this distribution.
+
+Note that distributions that provide this field should still specify any
+applicable license Trove classifiers in the `Classifiers`_ field. Even
+when an appropriate Trove classifier is available, the license summary can
+be a good way to specify a particular version of that license, or to
+indicate any variations or exceptions to the license.
+
+This field SHOULD contain fewer than 512 characters and MUST contain fewer
+than 2048.
+
+This field SHOULD NOT contain any line breaks.
+
+The full license text SHOULD be included as a separate file in the source
+archive for the distribution. See `Document names`_ for details.
 
 Example::
 
-    "summary": "A module that is more fiendish than soft cushions."
-
-.. note::
-
-   This used to be mandatory, and it's still highly recommended, but really,
-   nothing should break even when it's missing.
-
-
-Description
------------
-
-The distribution metadata should include a longer description of the
-distribution that may run to several paragraphs. Software that deals
-with metadata should not assume any maximum size for the description.
-
-The distribution description can be written using reStructuredText
-markup [1]_.  For programs that work with the metadata, supporting
-markup is optional; programs may also display the contents of the
-field as plain text without any special formatting.  This means that
-authors should be conservative in the markup they use.
-
-Example::
-
-    "description": "The ComfyChair module replaces SoftCushions.\\n\\nUse until lunchtime, but pause for a cup of coffee at eleven."
-
-.. note::
-
-   The difficulty of editing this field in a raw JSON file is one of the
-   main reasons this metadata interchange format is NOT recommended for
-   use as an input format for build tools.
-
-
-Description Format
-------------------
-
-A field indicating the intended format of the text in the description field.
-This allows index servers to render the description field correctly and
-provide feedback on rendering errors, rather than having to guess the
-intended format.
-
-If this field is omitted, or contains an unrecognised value, the default
-rendering format MUST be plain text.
-
-The following format names SHOULD be used for the specified markup formats:
-
-* ``txt``: Plain text (default handling if field is omitted)
-* ``rst``: reStructured Text
-* ``md``: Markdown (exact syntax variant will be implementation dependent)
-* ``adoc``: AsciiDoc
-* ``html``: HTML
-
-Automated tools MAY render one or more of the listed formats as plain
-text and MAY accept other markup formats beyond those listed.
-
-Example::
-
-    "description_format": "rst"
+    "license": "GPL version 3, excluding DRM provisions"
 
 
 Keywords
@@ -661,40 +808,6 @@
     "keywords": ["comfy", "chair", "cushions", "too silly", "monty python"]
 
 
-License
--------
-
-A string indicating the license covering the distribution where the license
-is not a simple selection from the "License" Trove classifiers. See
-Classifiers" below.  This field may also be used to specify a
-particular version of a license which is named via the ``Classifier``
-field, or to indicate a variation or exception to such a license.
-
-Example::
-
-    "license": "GPL version 3, excluding DRM provisions"
-
-
-License URL
------------
-
-A specific URL referencing the full licence text for this version of the
-distribution.
-
-Example::
-
-    "license_url": "https://github.com/pypa/pip/blob/1.3.1/LICENSE.txt"
-
-.. note::
-
-   Like Version URL, this is handled separately from the project URLs
-   as it is important that it remain accurate for this *specific*
-   version of the distribution, even if the project later switches to a
-   different license.
-
-   The project URLs field is intended for more stable references.
-
-
 Classifiers
 -----------
 
@@ -704,15 +817,64 @@
 Example::
 
     "classifiers": [
-        "Development Status :: 4 - Beta",
-        "Environment :: Console (Text Based)"
+      "Development Status :: 4 - Beta",
+      "Environment :: Console (Text Based)",
+      "License :: OSI Approved :: GNU General Public License v3 (GPLv3)"
     ]
 
 
-Contact metadata
-================
-
-Contact metadata for a distribution is provided to allow users to get
+Document names
+--------------
+
+Filenames for supporting documents included in the distribution's
+``dist-info`` metadata directory.
+
+The following supporting documents can be named:
+
+* ``description``: a file containing a long description of the distribution
+* ``license``: a file with the full text of the distribution's license
+* ``changelog``: a file describing changes made to the distribution
+
+Supporting documents MUST be included directly in the ``dist-info``
+directory. Directory separators are NOT permitted in document names.
+
+The markup format (if any) for the file is indicated by the file extension.
+This allows index servers and other automated tools to render included
+text documents correctly and provide feedback on rendering errors, rather
+than having to guess the intended format.
+
+If the filename has no extension, or the extension is not recognised, the
+default rendering format MUST be plain text.
+
+The following markup renderers SHOULD be used for the specified file
+extensions:
+
+* Plain text: ``.txt``, no extension, unknown extension
+* reStructuredText: ``.rst``
+* Markdown: ``.md``
+* AsciiDoc: ``.adoc``, ``.asc``, ``.asciidoc``
+* HTML: ``.html``, ``.htm``
+
+Automated tools MAY render one or more of the specified formats as plain
+text and MAY render other markup formats beyond those listed.
+
+Automated tools SHOULD NOT make any assumptions regarding the maximum length
+of supporting document content, except as necessary to protect the
+integrity of a service.
+
+Example::
+
+    "document_names": {
+        "description": "README.rst",
+        "license": "LICENSE.rst",
+        "changelog": "NEWS"
+    }
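The extension-to-renderer mapping above, with its mandatory plain-text fallback, can be sketched directly (the function name and return labels are illustrative only):

```python
import os

# Mapping from file extension to markup renderer, per the list above.
RENDERERS = {
    ".txt": "txt",
    ".rst": "rst",
    ".md": "md",
    ".adoc": "adoc", ".asc": "adoc", ".asciidoc": "adoc",
    ".html": "html", ".htm": "html",
}

def renderer_for(document_name):
    """Pick the markup renderer for a supporting document.
    Missing or unrecognised extensions MUST fall back to plain text."""
    extension = os.path.splitext(document_name)[1].lower()
    return RENDERERS.get(extension, "txt")
```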
+
+
+Contributor metadata
+====================
+
+Contributor metadata for a distribution is provided to allow users to get
 access to more information about the distribution and its maintainers.
 
 These details are recorded as mappings with the following subfields:
@@ -720,42 +882,36 @@
 * ``name``: the name of an individual or group
 * ``email``: an email address (this may be a mailing list)
 * ``url``: a URL (such as a profile page on a source code hosting service)
-* ``type``: one of ``"author"``, ``"maintainer"``, ``"organization"``
-  or ``"individual"``
+* ``role``: one of ``"author"``, ``"maintainer"`` or ``"contributor"``
 
 The ``name`` subfield is required, the other subfields are optional.
 
-If no specific contact type is stated, the default is ``individual``.
-
-The different contact types are as follows:
+If no specific role is stated, the default is ``contributor``.
+
+Email addresses must be in the form ``local-part@domain`` where the
+local-part may be up to 64 characters long and the entire email address
+contains no more than 254 characters. The formal specification of the
+format is in RFC 5322 (sections 3.2.3 and 3.4.1) and RFC 5321, with a more
+readable form given in the informational RFC 3696 and the associated errata.
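The two length limits stated above can be checked cheaply before handing the address to a full RFC 5322/5321 parser (a sketch; the function name is illustrative and full syntactic validation is deliberately out of scope):

```python
def plausible_contact_email(address):
    """Length-only sanity check for the ``email`` subfield: local-part
    up to 64 characters, whole address no more than 254 characters."""
    local_part, sep, domain = address.rpartition("@")
    if not sep or not local_part or not domain:
        return False
    return len(local_part) <= 64 and len(address) <= 254
```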
+
+The defined contributor roles are as follows:
 
 * ``author``: the original creator of a distribution
 * ``maintainer``: the current lead contributor for a distribution, when
   they are not the original creator
-* ``individual``: any other individuals involved in the creation of the
-  distribution
-* ``organization``: indicates that these contact details are for an
-  organization (formal or informal) rather than for a specific individual
-
-.. note::
-
-   This is admittedly a little complicated, but it's designed to replace the
-   Author, Author-Email, Maintainer, Maintainer-Email fields from metadata
-   1.2 in a way that allows those distinctions to be fully represented for
-   lossless translation, while allowing future distributions to pretty
-   much ignore everything other than the contact/contributor distinction
-   if they so choose.
-
-Contact metadata is optional. Automated tools MUST operate correctly if
-a distribution does not provide them, including failing cleanly when an
-operation depending on one of these fields is requested.
+* ``contributor``: any other individuals or organizations involved in the
+  creation of the distribution
+
+Contact and contributor metadata is optional. Automated tools MUST operate
+correctly if a distribution does not provide it, including failing cleanly
+when an operation depending on one of these fields is requested.
 
 
 Contacts
 --------
 
-A list of contact entries giving the recommended contact points for getting
-more information about the project.
+A list of contributor entries giving the recommended contact points for
+getting more information about the project.
 
 The example below would be suitable for a project that was in the process
 of handing over from the original author to a new lead maintainer, while
@@ -766,18 +922,17 @@
     "contacts": [
       {
         "name": "Python Packaging Authority/Distutils-SIG",
-        "type": "organization",
         "email": "distutils-sig@python.org",
         "url": "https://bitbucket.org/pypa/"
       },
       {
         "name": "Samantha C.",
-        "type": "maintainer",
+        "role": "maintainer",
         "email": "dontblameme@example.org"
       },
       {
         "name": "Charlotte C.",
-        "type": "author",
+        "role": "author",
         "email": "iambecomingasketchcomedian@example.com"
       }
     ]
@@ -786,19 +941,19 @@
 Contributors
 ------------
 
-A list of contact entries for other contributors not already listed as
+A list of contributor entries for other contributors not already listed as
 current project points of contact. The subfields within the list elements
 are the same as those for the main contact field.
 
 Example::
 
     "contributors": [
-        {"name": "John C."},
-        {"name": "Erik I."},
-        {"name": "Terry G."},
-        {"name": "Mike P."},
-        {"name": "Graeme C."},
-        {"name": "Terry J."}
+      {"name": "John C."},
+      {"name": "Erik I."},
+      {"name": "Terry G."},
+      {"name": "Mike P."},
+      {"name": "Graeme C."},
+      {"name": "Terry J."}
     ]
 
 
@@ -819,55 +974,115 @@
 Example::
 
     "project_urls": {
-        "Documentation": "https://distlib.readthedocs.org"
-        "Home": "https://bitbucket.org/pypa/distlib"
-        "Source": "https://bitbucket.org/pypa/distlib/src"
-        "Tracker": "https://bitbucket.org/pypa/distlib/issues"
+      "Documentation": "https://distlib.readthedocs.org",
+      "Home": "https://bitbucket.org/pypa/distlib",
+      "Repository": "https://bitbucket.org/pypa/distlib/src",
+      "Tracker": "https://bitbucket.org/pypa/distlib/issues"
     }
 
 
-Dependency metadata
-===================
+Semantic dependencies
+=====================
 
 Dependency metadata allows distributions to make use of functionality
 provided by other distributions, without needing to bundle copies of those
 distributions.
 
+Semantic dependencies allow publishers to indicate not only which other
+distributions are needed, but also *why* they're needed. This additional
+information allows integrators to install just the dependencies they need
+for specific activities, making it easier to minimise installation
+footprints in constrained environments (regardless of the reasons for
+those constraints).
+
+Distributions may declare five different kinds of dependency:
+
+* "Meta" dependencies: subdistributions that are grouped together into a
+  single larger metadistribution for ease of reference and installation.
+* Runtime dependencies: other distributions that are needed to actually use
+  this distribution (but are not considered subdistributions).
+* Test dependencies: other distributions that are needed to run the
+  automated test suite for this distribution (but are not needed just to
+  use it).
+* Build dependencies: other distributions that are needed to build this
+  distribution.
+* Development dependencies: other distributions that are needed when
+  working on this distribution (but do not fit into one of the other
+  dependency categories).
+
+Within each of these categories, distributions may also declare "Extras".
+Extras are dependencies that may be needed for some optional functionality,
+or which are otherwise complementary to the distribution.
+
 Dependency management is heavily dependent on the version identification
 and specification scheme defined in PEP 440.
 
-.. note::
-
-    This substantially changes the old two-phase setup vs runtime dependency
-    model in metadata 1.2 (which was in turn derived from the setuptools
-    dependency parameters). The translation is that ``dev_requires`` and
-    ``build_requires`` both map to ``Setup-Requires-Dist``
-    in 1.2, while ``requires`` maps to ``Requires-Dist``. To go the other
-    way, ``Setup-Requires-Dist`` maps to ``build_requires`` and
-    ``Requires-Dist`` maps to ``requires``.
-
 All of these fields are optional. Automated tools MUST operate correctly if
 a distribution does not provide them, by assuming that a missing field
 indicates "Not applicable for this distribution".
 
 
-Dependency specifications
--------------------------
-
-Individual dependencies are typically defined as strings containing a
-distribution name (as found in the ``name`` field). The dependency name
+Dependency specifiers
+---------------------
+
+While many dependencies will be needed to use a distribution at all, others
+are needed only on particular platforms or only when particular optional
+features of the distribution are needed. To handle this, dependency
+specifiers are mappings with the following subfields:
+
+* ``requires``: a list of `requirement specifiers
+  <Requirement specifiers_>`__ needed to satisfy the dependency
+* ``extra``: the name of a set of optional dependencies that are requested
+  and installed together. See `Extras (optional dependencies)`_ for details.
+* ``environment``: an environment marker defining the environment that
+  needs these dependencies. See `Environment markers`_ for details.
+
+``requires`` is the only required subfield. When it is the only subfield, the
+dependencies are said to be *unconditional*. If ``extra`` or ``environment``
+is specified, then the dependencies are *conditional*.
+
+All three fields may be supplied, indicating that the dependencies are
+needed only when the named extra is requested in a particular environment.
+
+Automated tools MUST combine related dependency specifiers (those with
+common values for ``extra`` and ``environment``) into a single specifier
+listing multiple requirements when serialising metadata or
+passing it to an install hook.
+
+Despite this required normalisation, the same extra name or environment
+marker MAY appear in multiple conditional dependencies. This may happen,
+for example, if an extra itself only needs some of its dependencies in
+specific environments. It is only the combination of extras and environment
+markers that is required to be unique in a list of dependency specifiers.
+
+Any extras referenced from a dependency specifier MUST be named in the
+`Extras`_ field for this distribution. This helps avoid typographical
+errors and also makes it straightforward to identify the available extras
+without scanning the full set of dependencies.
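The required normalisation of dependency specifiers can be illustrated with a short sketch, assuming specifiers are plain dicts shaped like the JSON examples later in this section (the function name is mine):

```python
def combine_dependency_specifiers(specifiers):
    """Merge dependency specifiers that share the same ``extra`` and
    ``environment`` values into a single specifier listing multiple
    requirements, as the normalisation rule requires."""
    combined = {}
    for spec in specifiers:
        key = (spec.get("extra"), spec.get("environment"))
        if key not in combined:
            entry = {"requires": []}
            if spec.get("extra") is not None:
                entry["extra"] = spec["extra"]
            if spec.get("environment") is not None:
                entry["environment"] = spec["environment"]
            combined[key] = entry
        combined[key]["requires"].extend(spec["requires"])
    return list(combined.values())
```

Two unconditional specifiers collapse into one; conditional specifiers merge only with others carrying the identical extra/environment combination.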
+
+
+Requirement specifiers
+----------------------
+
+Individual requirements are defined as strings containing a distribution
+name (as found in the ``name`` field). The distribution name
 may be followed by an extras specifier (enclosed in square
-brackets) and by a version specification (within parentheses).
+brackets) and by a version specifier or direct reference (within
+parentheses).
+
+Whitespace is permitted between the distribution name and an opening
+square bracket or parenthesis. Whitespace is also permitted between a
+closing square bracket and an opening parenthesis.
 
 See `Extras (optional dependencies)`_ for details on extras and PEP 440
-for details on version specifiers.
+for details on version specifiers and direct references.
 
 The distribution names should correspond to names as found on the `Python
 Package Index`_; while these names are often the same as the module names
 as accessed with ``import x``, this is not always the case (especially
 for distributions that provide multiple top level modules or packages).
 
-Example dependency specifications::
+Example requirement specifiers::
 
     "Flask"
     "Django"
@@ -877,43 +1092,6 @@
     "ComfyChair[warmup] (> 0.1)"
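The three-part structure described above (name, optional bracketed extras, optional parenthesised version clauses, with optional whitespace between the parts) can be captured in a small parser sketch (the regex is illustrative, not the normative grammar):

```python
import re

# name, optional [extras], optional (version specifier or direct
# reference); whitespace is allowed between the three parts.
_REQUIREMENT = re.compile(
    r"""^\s*
        (?P<name>[A-Za-z0-9]([A-Za-z0-9._-]*[A-Za-z0-9])?)
        \s*(?:\[(?P<extras>[^\]]*)\])?
        \s*(?:\((?P<versions>[^)]*)\))?
        \s*$""",
    re.VERBOSE,
)

def parse_requirement(text):
    """Split a requirement specifier into its components.
    Returns (name, tuple of extras, version clause text or None)."""
    match = _REQUIREMENT.match(text)
    if match is None:
        raise ValueError("invalid requirement specifier: %r" % text)
    extras = match.group("extras")
    extras = tuple(e.strip() for e in extras.split(",")) if extras else ()
    return match.group("name"), extras, match.group("versions")
```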
 
 
-Conditional dependencies
-------------------------
-
-While many dependencies will be needed to use a distribution at all, others
-are needed only on particular platforms or only when particular optional
-features of the distribution are needed. To enable this, dependency fields
-are marked as either unconditional (indicated by ``requires`` in the field
-name) or conditional (indicated by ``may_require``) in the field name.
-
-Unconditional dependency fields are lists of dependency specifications, with
-each entry indicated a required dependency.
-
-Conditional dependencies are lists of mappings with the following fields:
-
-* ``dependencies``: a list of relevant dependency specifications
-* ``extra``: the name of a set of optional dependencies that are requested
-  and installed together. See `Extras (optional dependencies)`_ for details.
-* ``environment``: an environment marker defining the environment that
-  needs these dependencies. See `Environment markers`_ for details.
-
-The ``dependencies`` field is required, as is at least one of ``extra`` and
-``environment``. All three fields may be supplied, indicating that the
-dependency is needed only when that particular set of additional
-dependencies is requested in a particular environment.
-
-Note that the same extras and environment markers MAY appear in multiple
-conditional dependencies. This may happen, for example, if an extra itself
-only needs some of its dependencies in specific environments.
-
-.. note::
-
-   Technically, you could store the conditional and unconditional
-   dependencies in a single list and switch based on the entry type
-   (string or mapping), but the ``*requires`` vs ``*may-require`` two
-   list design seems easier to understand and work with.
-
-
 Mapping dependencies to development and distribution activities
 ---------------------------------------------------------------
 
@@ -921,207 +1099,176 @@
 and development activities identified above, and govern which dependencies
 should be installed for the specified activities:
 
-* Deployment dependencies:
-
-    * ``requires``
-    * ``may_require``
-    * Request the ``test`` extra to also install
-
+* Implied runtime dependencies:
+
+    * ``meta_requires``
+    * ``run_requires``
+
+* Implied build dependencies:
+
+    * ``build_requires``
+    * If running the distribution's test suite as part of the build process,
+      request the ``:meta:``, ``:run:`` and ``:test:`` extras to also
+      install:
+
+      * ``meta_requires``
+      * ``run_requires``
       * ``test_requires``
-      * ``test_may_require``
-
-* Build dependencies:
-
+
+* Implied development and publication dependencies:
+
+    * ``meta_requires``
+    * ``run_requires``
     * ``build_requires``
-    * ``build_may_require``
-
-* Development dependencies:
-
-    * ``requires``
-    * ``may_require``
-    * ``build_requires``
-    * ``build_may_require``
     * ``test_requires``
-    * ``test_may_require``
     * ``dev_requires``
-    * ``dev_may_require``
-
-To ease compatibility with existing two phase setup/deployment toolchains,
-installation tools MAY treat ``dev_requires`` and ``dev_may_require`` as
-additions to ``build_requires`` and ``build_may_require`` rather than
-as separate fields.
-
-Installation tools SHOULD allow users to request at least the following
-operations for a named distribution:
-
-* Install the distribution and any deployment dependencies.
-* Install just the build dependencies without installing the distribution
-* Install just the development dependencies without installing
-  the distribution
-
-The notation described in `Extras (optional dependencies)`_ SHOULD be used to
-request additional optional dependencies when installing deployment
-or build dependencies.
-
-Installation tools SHOULD report an error if dependencies cannot be found,
-MUST at least emit a warning, and MAY allow the user to force the
-installation to proceed regardless.
-
-.. note::
-
-    As an example of mapping this to Linux distro packages, assume an
-    example project without any extras defined is split into 2 RPMs
-    in a SPEC file: example and example-devel
-
-    The ``requires`` and applicable ``may_require`` dependencies would be
-    mapped to the Requires dependencies for the "example" RPM (a mapping from
-    environment markers to SPEC file conditions would also allow those to
-    be handled correctly)
-
-    The ``build_requires`` and ``build_may_require`` dependencies would be
-    mapped to the BuildRequires dependencies for the "example" RPM.
-
-    All defined dependencies relevant to Linux, including those in
-    ``dev_requires`` and ``test_requires``, would become Requires
-    dependencies for the "example-devel" RPM.
-
-    If a project defines any extras, those would be mapped to additional
-    virtual RPMs with appropriate BuildRequires and Requires entries based
-    on the details of the dependency specifications.
-
-    A documentation toolchain dependency like Sphinx would either go in
-    ``build_requires`` (for example, if man pages were included in the
-    built distribution) or in ``dev_requires`` (for example, if the
-    documentation is published solely through ReadTheDocs or the
-    project website). This would be enough to allow an automated converter
-    to map it to an appropriate dependency in the spec file.
-
-
-Requires
---------
-
-A list of other distributions needed when this distribution is deployed.
-
-Example::
-
-    "requires": ["SciPy", "PasteDeploy", "zope.interface (>3.5.0)"]
+
+The notation described in `Extras (optional dependencies)`_ SHOULD be used
+to determine exactly what gets installed for various operations.
+
+Installation tools SHOULD report an error if dependencies cannot be
+satisfied, MUST at least emit a warning, and MAY allow the user to force
+the installation to proceed regardless.
+
+See Appendix B for an overview of mapping these dependencies to an RPM
+spec file.
 
 
 Extras
 ------
 
 A list of optional sets of dependencies that may be used to define
-conditional dependencies in ``"may_require"`` and similar fields. See
+conditional dependencies in dependency fields. See
 `Extras (optional dependencies)`_ for details.
 
-The extra name``"test"`` is reserved for requesting the dependencies
-specified in ``test_requires`` and ``test_may_require`` and is NOT
-permitted in this field.
+The names of extras MUST abide by the same restrictions as those for
+distribution names.
 
 Example::
 
     "extras": ["warmup"]
 
 
-May require
------------
-
-A list of other distributions that may be needed when this distribution
-is deployed, based on the extras requested and the target deployment
-environment.
-
-Any extras referenced from this field MUST be named in the `Extras`_ field.
+Meta requires
+-------------
+
+An abbreviation of "metadistribution requires". This is a list of
+subdistributions that can easily be installed and used together by
+depending on this metadistribution.
+
+In this field, automated tools:
+
+* MUST allow strict version matching
+* MUST NOT allow more permissive version specifiers
+* MAY allow direct references
+
+Public index servers SHOULD NOT allow the use of direct references in
+uploaded distributions. Direct references are intended primarily as a
+tool for software integrators rather than publishers.
+
+Distributions that rely on direct references to platform specific binary
+archives SHOULD define appropriate constraints in their
+``supports_environments`` field.
 
 Example::
 
-        "may_require": [
-          {
-            "dependencies": ["pywin32 (>1.0)"],
-            "environment": "sys.platform == 'win32'"
-          },
-          {
-            "dependencies": ["SoftCushions"],
-            "extra": "warmup"
-          }
-        ]
+    "meta_requires": [
+      {
+        "requires": ["ComfyUpholstery (== 1.0a2)",
+                     "ComfySeatCushion (== 1.0a2)"]
+      },
+      {
+        "requires": ["CupOfTeaAtEleven (== 1.0a2)"],
+        "environment": "'linux' in sys.platform"
+      }
+    ]
+
+
+Run requires
+------------
+
+A list of other distributions needed to actually run this distribution.
+
+Automated tools MUST NOT allow strict version matching clauses or direct
+references in this field; if permitted at all, such clauses should appear
+in ``meta_requires`` instead.
+
+Example::
+
+    "run_requires": [
+      {
+        "requires": ["SciPy", "PasteDeploy", "zope.interface (>3.5.0)"]
+      },
+      {
+        "requires": ["pywin32 (>1.0)"],
+        "environment": "sys.platform == 'win32'"
+      },
+      {
+        "requires": ["SoftCushions"],
+        "extra": "warmup"
+      }
+    ]
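Environment markers such as ``"sys.platform == 'win32'"`` above gate a specifier's requirements on the target environment. A tiny evaluation sketch follows (illustrative only: a production tool needs a real marker parser, and ``eval`` is tolerable here only because the namespace is restricted to stand-ins for the marker variables):

```python
import os
import platform
import sys

def evaluate_marker(marker):
    """Evaluate a simple environment marker expression, e.g.
    "sys.platform == 'win32'" or "'linux' in sys.platform"."""
    class _Namespace:
        def __init__(self, **values):
            self.__dict__.update(values)

    names = {
        "__builtins__": {},  # no builtins reachable from the expression
        "sys": _Namespace(platform=sys.platform),
        "os": _Namespace(name=os.name),
        "platform": _Namespace(version=platform.version(),
                               machine=platform.machine()),
    }
    return bool(eval(marker, names))
```

A specifier's requirements are installed only when its marker evaluates true in the target environment.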
+
 
 Test requires
 -------------
 
 A list of other distributions needed in order to run the automated tests
-for this distribution, either during development or when running the
-``test_installed_dist`` metabuild when deployed.
+for this distribution.
+
+Automated tools MAY disallow strict version matching clauses and direct
+references in this field and SHOULD at least emit a warning for such clauses.
+
+Public index servers SHOULD NOT allow strict version matching clauses or
+direct references in this field.
 
 Example::
 
-    "test_requires": ["unittest2"]
-
-
-Test may require
-----------------
-
-A list of other distributions that may be needed in order to run the
-automated tests for this distribution, either during development or when
-running the ``test_installed_dist`` metabuild when deployed, based on the
-extras requested and the target deployment environment.
-
-Any extras referenced from this field MUST be named in the `Extras`_ field.
-
-Example::
-
-        "test_may_require": [
-          {
-            "dependencies": ["pywin32 (>1.0)"],
-            "environment": "sys.platform == 'win32'"
-          },
-          {
-            "dependencies": ["CompressPadding"],
-            "extra": "warmup"
-          }
-        ]
+    "test_requires": [
+      {
+        "requires": ["unittest2"]
+      },
+      {
+        "requires": ["pywin32 (>1.0)"],
+        "environment": "sys.platform == 'win32'"
+      },
+      {
+        "requires": ["CompressPadding"],
+        "extra": "warmup"
+      }
+    ]
 
 
 Build requires
 --------------
 
 A list of other distributions needed when this distribution is being built
-(creating a binary archive from a source archive).
+(creating a binary archive from an sdist, source archive or VCS checkout).
 
 Note that while these are build dependencies for the distribution being
 built, the installation is a *deployment* scenario for the dependencies.
 
+Automated tools MAY disallow strict version matching clauses and direct
+references in this field and SHOULD at least emit a warning for such clauses.
+
+Public index servers SHOULD NOT allow strict version matching clauses or
+direct references in this field.
+
 Example::
 
-    "build_requires": ["setuptools (>= 0.7)"]
-
-
-Build may require
------------------
-
-A list of other distributions that may be needed when this distribution
-is built (creating a binary archive from a source archive), based on the
-features requested and the build environment.
-
-Note that while these are build dependencies for the distribution being
-built, the installation is a *deployment* scenario for the dependencies.
-
-Any extras referenced from this field MUST be named in the `Extras`_ field.
-
-Automated tools MAY assume that all extras are implicitly requested when
-installing build dependencies.
-
-Example::
-
-        "build_may_require": [
-          {
-            "dependencies": ["pywin32 (>1.0)"],
-            "environment": "sys.platform == 'win32'"
-          },
-          {
-            "dependencies": ["cython"],
-            "extra": "c-accelerators"
-          }
-        ]
+    "build_requires": [
+      {
+        "requires": ["setuptools (>= 0.7)"]
+      },
+      {
+        "requires": ["pywin32 (>1.0)"],
+        "environment": "sys.platform == 'win32'"
+      },
+      {
+        "requires": ["cython"],
+        "extra": "c-accelerators"
+      }
+    ]
 
 
 Dev requires
@@ -1133,42 +1280,27 @@
 
 Additional dependencies that may be listed in this field include:
 
-* tools needed to create a source archive
+* tools needed to create an sdist from a source archive or VCS checkout
 * tools needed to generate project documentation that is published online
   rather than distributed along with the rest of the software
-* additional test dependencies for tests which are not executed when the
-  test is invoked through the ``test_installed_dist`` metabuild hook (for
-  example, tests that require a local database server and web server and
-  may not work when fully installed on a production system)
+
+Automated tools MAY disallow strict version matching clauses and direct
+references in this field and SHOULD at least emit a warning for such clauses.
+
+Public index servers SHOULD NOT allow strict version matching clauses or
+direct references in this field.
 
 Example::
 
-    "dev_requires": ["hgtools", "sphinx (>= 1.0)"]
-
-
-Dev may require
----------------
-
-A list of other distributions that may be needed during development of
-this distribution, based on the features requested and the build environment.
-
-This should only be needed if the project's own utility scripts have
-platform specific dependencies that aren't already defined as deployment
-or build dependencies.
-
-Any extras referenced from this field MUST be named in the `Extras`_ field.
-
-Automated tools MAY assume that all extras are implicitly requested when
-installing development dependencies.
-
-Example::
-
-        "dev_may_require": [
-          {
-            "dependencies": ["pywin32 (>1.0)"],
-            "environment": "sys.platform == 'win32'"
-          }
-        ]
+    "dev_requires":
+      {
+        "requires": ["hgtools", "sphinx (>= 1.0)"]
+      },
+      {
+        "requires": ["pywin32 (>1.0)"],
+        "environment": "sys.platform == 'win32'"
+      }
+    ]
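The conditional structure shown in these examples can be sketched as a small
selection helper. The ``applicable_requires`` function below is hypothetical
(not part of the specification), and its ``evaluate`` parameter stands in for
a real environment marker evaluator:

```python
def applicable_requires(dependency_field, evaluate, requested_extras=()):
    """Select the dependencies that apply for one *_requires field.

    *dependency_field* is a list of mappings as shown in the examples
    above; *evaluate* is a callable that evaluates an environment
    marker string to True or False.
    """
    selected = []
    for entry in dependency_field:
        # Entries gated on an extra apply only when that extra is requested.
        if "extra" in entry and entry["extra"] not in requested_extras:
            continue
        # Entries gated on an environment marker apply only when it holds.
        if "environment" in entry and not evaluate(entry["environment"]):
            continue
        selected.extend(entry["requires"])
    return selected
```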
 
 
 Provides
@@ -1190,7 +1322,17 @@
 project is able to include a ``"provides": ["distribute"]`` entry to
 satisfy any projects that require the now obsolete distribution's name.
 
-A distribution may also provide a "virtual" project name, which does
+To avoid malicious hijacking of names, when interpreting metadata retrieved
+from a public index server, automated tools MUST NOT pay any attention to
+``"provides"`` entries that do not correspond to a published distribution.
+
+However, to appropriately handle project forks and mergers, automated tools
+MUST accept ``"provides"`` entries that name other distributions when the
+entry is retrieved from a local installation database or when there is a
+corresponding ``"obsoleted_by"`` entry in the metadata for the named
+distribution.
+
+A distribution may wish to depend on a "virtual" project name, which does
 not correspond to any separately distributed project:  such a name
 might be used to indicate an abstract capability which could be supplied
 by one of multiple projects.  For example, multiple projects might supply
@@ -1198,13 +1340,20 @@
 that it provides ``sqlalchemy-postgresql-bindings``, allowing other
 projects to depend only on having at least one of them installed.
 
-A version declaration may be supplied and must follow the rules described
-in PEP 440. The distribution's version identifier will be implied
-if none is specified.
+To handle this case in a way that doesn't allow for name hijacking, the
+authors of the distribution that first defines the virtual dependency should
+create a project on the public index server with the corresponding name, and
+depend on the specific distribution that should be used if no other provider
+is already installed. This also has the benefit of publishing the default
+provider in a way that automated tools will understand.
+
+A version declaration may be supplied as part of an entry in the provides
+field and must follow the rules described in PEP 440. The distribution's
+version identifier will be implied if none is specified.
 
 Example::
 
-    "provides": ["AnotherProject (3.4)", "virtual_package"]
+    "provides": ["AnotherProject (3.4)", "virtual-package"]
 
 
 Obsoleted by
@@ -1250,63 +1399,253 @@
 Individual entries are environment markers, as described in
 `Environment markers`_.
 
-Installation tools SHOULD report an error if supported platforms are
+Installation tools SHOULD report an error if supported environments are
 specified by the distribution and the current platform fails to match
 any of them, MUST at least emit a warning, and MAY allow the user to
 force the installation to proceed regardless.
 
-Examples::
-
+The two main uses of this field are to declare which versions of Python
+and which underlying operating systems are supported.
+
+Examples indicating supported Python versions::
+
+   # Supports Python 2.6+
+   "supports_environments": ["python_version >= '2.6'"]
+
+   # Supports Python 2.6+ (for 2.x) or 3.3+ (for 3.x)
+   "supports_environments": ["python_version >= '3.3'",
+                             "'3.0' > python_version >= '2.6'"]
+
+Examples indicating supported operating systems::
+
+   # Windows only
    "supports_environments": ["sys_platform == 'win32'"]
+
+   # Anything except Windows
    "supports_environments": ["sys_platform != 'win32'"]
+
+   # Linux or BSD only
    "supports_environments": ["'linux' in sys_platform",
                              "'bsd' in sys_platform"]
 
+Example where the supported Python version varies by platform::
+
+   # The standard library's os module has long supported atomic renaming
+   # on POSIX systems, but only gained atomic renaming on Windows in Python
+   # 3.3. A distribution that needs atomic renaming support for reliable
+   # operation might declare the following supported environments.
+   "supports_environments": ["python_version >= '2.6' and sys_platform != 'win32'",
+                             "python_version >= '3.3' and sys_platform == 'win32'"]
+
+Installed interfaces
+====================
+
+Most Python distributions expose packages and modules for import through
+the Python module namespace. Distributions may also expose other
+interfaces when installed.
+
+Export specifiers
+-----------------
+
+An export specifier is a string consisting of a qualified module name, an
+optional qualified name within that module (separated by a colon), and an
+optional extra name enclosed in square brackets. This gives the following
+four possible forms for an export specifier::
+
+   module
+   module:name
+   module[requires_extra]
+   module:name[requires_extra]
 
 .. note::
 
-   This field replaces the old Platform, Requires-Platform and
-   Requires-Python fields and has been redesigned with environment
-   marker based semantics that should make it possible to reliably flag,
-   for example, Unix specific or Windows specific distributions, as well
-   as Python 2 only and Python 3 only distributions.
-
-
-Metabuild system
-================
-
-The ``metabuild_hooks`` field is used to define various operations that
-may be invoked on a distribution in a platform independent manner.
-
-The metabuild system currently defines three operations as part of the
-deployment of a distribution:
+   The jsonschema file currently restricts qualified names using the
+   Python 2 ASCII identifier rules. This may need to be reconsidered
+   given the more relaxed identifier rules in Python 3.
+
+The meaning of the subfields is as follows:
+
+* ``module``: the module providing the export
+* ``name``: if applicable, the qualified name of the export within the module
+* ``requires_extra``: indicates the export will only work correctly if the
+  additional dependencies named in the given extra are available in the
+  installed environment
+
+.. note::
+
+   I tried this as a mapping with subfields, and it made the examples below
+   unreadable. While this PEP is mostly for tool use, readability still
+   matters to some degree for debugging purposes, and because I expect
+   snippets of the format to be reused elsewhere.
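The four forms can be split into their subfields with a short parser. This
sketch (the regular expression and function name are illustrative, not part
of the specification) returns ``None`` for any subfield that is absent:

```python
import re

# Matches: module, module:name, module[extra], module:name[extra]
_EXPORT_RE = re.compile(
    r"^(?P<module>[A-Za-z_][A-Za-z0-9_.]*)"      # qualified module name
    r"(?::(?P<name>[A-Za-z_][A-Za-z0-9_.]*))?"   # optional :name
    r"(?:\[(?P<extra>[A-Za-z0-9._-]+)\])?$"      # optional [requires_extra]
)

def parse_export_specifier(spec):
    """Split an export specifier into (module, name, requires_extra)."""
    m = _EXPORT_RE.match(spec)
    if m is None:
        raise ValueError("invalid export specifier: %r" % spec)
    return m.group("module"), m.group("name"), m.group("extra")
```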
+
+
+Modules
+-------
+
+A list of qualified names of modules and packages that the distribution
+provides for import.
+
+.. note::
+
+   The jsonschema file currently restricts qualified names using the
+   Python 2 ASCII identifier rules. This may need to be reconsidered
+   given the more relaxed identifier rules in Python 3.
+
+For names that contain dots, the portion of the name before the final dot
+MUST appear either in the installed module list or in the namespace package
+list.
+
+To help avoid name conflicts, it is RECOMMENDED that distributions provide
+a single top level module or package that matches the distribution name
+(or a lower case equivalent). This requires that the distribution name
+also meet the requirements of a Python identifier, which are stricter
+than those for distribution names. This practice will also make it easier to
+find authoritative sources for modules.
+
+Index servers SHOULD allow multiple distributions to publish the same
+modules, but MAY notify distribution authors of potential conflicts.
+
+Installation tools SHOULD report an error when asked to install a
+distribution that provides a module that is also provided by a different,
+previously installed, distribution.
+
+Note that attempting to import some declared modules may result in an
+exception if the appropriate extras are not installed.
+
+Example::
+
+    "modules": ["chair", "chair.cushions", "python_sketches.nobody_expects"]
+
+.. note::
+
+   Making this a list of export specifiers instead would allow a distribution
+   to declare when a particular module requires a particular extra in order
+   to run correctly. On the other hand, there's an argument to be made that
+   that is the point where it starts to become worthwhile to split out a
+   separate distribution rather than using extras.
+
+
+Namespaces
+----------
+
+A list of qualified names of namespace packages that the distribution
+contributes modules to.
+
+.. note::
+
+   The jsonschema file currently restricts qualified names using the
+   Python 2 ASCII identifier rules. This may need to be reconsidered
+   given the more relaxed identifier rules in Python 3.
+
+On versions of Python prior to Python 3.3 (which provides native namespace
+package support), installation tools SHOULD emit a suitable ``__init__.py``
+file to properly initialise the namespace rather than using a distribution
+provided file.
+
+Installation tools SHOULD emit a warning and MAY emit an error if a
+distribution declares a namespace package that conflicts with the name of
+an already installed module or vice-versa.
+
+Example::
+
+    "namespaces": ["python_sketches"]
+
+
+Commands
+--------
+
+The ``commands`` mapping contains three subfields:
+
+* ``wrap_console``: console wrapper scripts to be generated by the installer
+* ``wrap_gui``: GUI wrapper scripts to be generated by the installer
+* ``prebuilt``: scripts created by the distribution's build process and
+  installed directly to the configured scripts directory
+
+``wrap_console`` and ``wrap_gui`` are both mappings of script names to
+export specifiers. The script names must follow the same naming rules as
+distribution names.
+
+The export specifiers for wrapper scripts must refer to either a package
+with a ``__main__`` submodule (if no ``name`` subfield is given in the export
+specifier) or else to a callable inside the named module.
+
+Installation tools should generate appropriate wrappers as part of the
+installation process.
+
+.. note::
+
+   Still needs more detail on what "appropriate wrappers" means. For now,
+   refer to what setuptools and zc.buildout generate as wrapper scripts.
+
+``prebuilt`` is a list of script paths, relative to the scripts directory in
+a wheel file or following installation. They are provided for informational
+purposes only - installing them is handled through the normal processes for
+files created when building a distribution.
+
+Index servers SHOULD allow multiple distributions to publish the same
+commands, but MAY notify distribution authors of potential conflicts.
+
+Installation tools SHOULD report an error when asked to install a
+distribution that provides a command that is also provided by a different,
+previously installed, distribution.
+
+Example::
+
+    "commands": {
+      "wrap_console": [{"wrapwithpython": "chair.run_cli"}],
+      "wrap_gui": [{"wrapwithpythonw": "chair:run_gui"}],
+      "prebuilt": ["notawrapper"]
+    }
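As a rough illustration of what a generated console wrapper might look like,
assuming a hypothetical ``chair:run_cli`` export specifier (real tools such
as setuptools and zc.buildout generate more elaborate wrappers, especially
on Windows):

```python
# Template for a generated console wrapper script (sketch only).
WRAPPER_TEMPLATE = """\
#!{python}
import sys
from {module} import {name}

if __name__ == '__main__':
    sys.exit({name}())
"""

def make_console_wrapper(python, module, name):
    """Render a wrapper script invoking the exported callable."""
    return WRAPPER_TEMPLATE.format(python=python, module=module, name=name)
```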
+
+
+Exports
+-------
+
+The ``exports`` field is a mapping containing qualified names as keys. Each
+key identifies an export group containing one or more exports published by
+the distribution.
+
+Export group names are defined by distributions that will then make use of
+the published export information in some way. The primary use case is for
+distributions that support a plugin model: defining an export group allows
+other distributions to indicate which plugins they provide, how they
+can be imported and accessed, and which additional dependencies (if any)
+are needed for the plugin to work correctly.
+
+To reduce the chance of name conflicts, export group names SHOULD use a
+prefix that corresponds to a module name in the distribution that defines
+the meaning of the export group. This practice will also make it easier to
+find authoritative documentation for export groups.
+
+Each individual export group is then a mapping of arbitrary non-empty string
+keys to export specifiers. The meaning of export names within an export
+group is up to the distribution that defines the export group. Creating an
+appropriate definition for the export name format can allow the importing
+distribution to determine whether or not an export is relevant without
+needing to import every exporting module.
+
+
+Install hooks
+=============
+
+The ``install_hooks`` field is used to define operations to be
+invoked on the distribution in the following situations:
 
 * Installing to a deployment system
 * Uninstalling from a deployment system
-* Running the distribution's test suite on a deployment system (hence the
-  ``test`` runtime extra)
-
-Distributions may define handles for each of these operations as an
-"entry point", a reference to a Python callable, with the module name
-separated from the reference within the module by a colon (``:``).
-
-Example metabuild hooks::
-
-    "metabuild_hooks": {
-        "postinstall": "myproject.build_hooks:postinstall",
-        "preuininstall": "myproject.build_hooks:preuninstall",
-        "test_installed_dist": "some_test_harness.metabuild_hook"
+
+Distributions may define handlers for each of these operations as an
+"entry point", which is a reference to a Python callable, with the module
+name separated from the reference within the module by a colon (``:``).
+
+Example install hooks::
+
+    "install_hooks": {
+      "postinstall": "ComfyChair.install_hooks:postinstall",
+      "preuininstall": "ComfyChair.install_hooks:preuninstall"
     }
 
-Build and installation tools MAY offer additional operations beyond the
-core metabuild operations. These operations SHOULD be composed from the
-defined metabuild operations where appropriate.
-
-Build and installation tools SHOULD support the legacy ``setup.py`` based
-commands for metabuild operations not yet defined as metabuild hooks.
-
-The metabuild hooks are gathered together into a single top level
-``metabuild_hooks`` field. The individual hooks are:
+The currently defined install hooks are:
 
 * ``postinstall``: run after the distribution has been installed to a
   target deployment system (or after it has been upgraded). If the hook is
@@ -1316,18 +1655,15 @@
   deployment system (or before it is upgraded). If the hook is not defined,
   it indicates no distribution specific actions are needed prior to
   uninstallation.
-* ``test_installed_dist``: test an installed distribution is working. If the
-  hook is not defined, it indicates the distribution does not support
-  execution of the test suite after deployment.
-
-The expected signatures of these hooks are as follows::
+
+The required signatures of these hooks are as follows::
 
     def postinstall(current_meta, previous_meta=None):
         """Run following installation or upgrade of the distribution
 
         *current_meta* is the distribution metadata for the version now
         installed on the current system
-        *previous_meta* is either missing or ``None`` (indicating a fresh
+        *previous_meta* is either omitted or ``None`` (indicating a fresh
         install) or else the distribution metadata for the version that
         was previously installed (indicating an upgrade or downgrade).
         """
@@ -1337,61 +1673,111 @@
 
         *current_meta* is the distribution metadata for the version now
         installed on the current system
-        *next_meta* is either missing or ``None`` (indicating complete
+        *next_meta* is either omitted or ``None`` (indicating complete
         uninstallation) or else the distribution metadata for the version
         that is about to be installed (indicating an upgrade or downgrade).
         """
 
-    def test_installed_dist(current_meta):
-        """Check an installed distribution is working correctly
-
-        Note that this check should always be non-destructive as it may be
-        invoked automatically by some tools.
-
-        Requires that the distribution's test dependencies be installed
-        (indicated by the ``test`` runtime extra).
-
-        Returns ``True`` if the check passes, ``False`` otherwise.
-        """
-
-Metabuild hooks MUST be called with at least abbreviated metadata, and MAY
-be called with full metadata.
-
-Where necessary, metabuild hooks check for the presence or absence of
-optional dependencies defined as extras using the same techniques used
-during normal operation of the distribution (for example, checking for
-import failures for optional dependencies).
+When install hooks are defined, it is assumed that they MUST be executed
+to obtain a properly working installation of the distribution, and to
+properly remove the distribution from a system.
+
+Install hooks SHOULD NOT be used to provide functionality that is
+expected to be provided by installation tools (such as rewriting of
+shebang lines and generation of executable wrappers for Windows).
+
+Installation tools MUST ensure the distribution is fully installed, and
+available through the import system and installation database when invoking
+install hooks.
+
+Installation tools MUST call install hooks with full metadata, rather than
+only the essential dependency resolution metadata.
+
+The given parameter names are considered part of the hook signature.
+Installation tools MUST call install hooks solely with keyword arguments.
+Install hook implementations MUST use the given parameter names.
+
+Installation tools SHOULD invoke install hooks automatically after
+installing a distribution from a binary archive.
+
+When installing from an sdist, source archive or VCS checkout, installation
+tools SHOULD create a binary archive using ``setup.py bdist_wheel`` and
+then install binary archive normally (including invocation of any install
+hooks). Installation tools SHOULD NOT invoke ``setup.py install`` directly.
+
+Installation tools SHOULD treat an exception thrown by a postinstall hook
+as a failure of the installation and revert any other changes made to the
+system.
+
+Installation tools SHOULD treat an exception thrown by a preuninstall hook
+as an indication the removal of the distribution should be aborted.
+
+Installation tools MUST NOT silently ignore install hooks, as failing
+to call these hooks may result in a misconfigured installation that fails
+unexpectedly at runtime. Installation tools MAY refuse to install
+distributions that define install hooks, or require that users
+explicitly opt in to permitting the execution of such hooks.
+
+Install hook implementations MUST NOT make any assumptions regarding the
+current working directory when they are invoked, and MUST NOT make
+persistent alterations to the working directory or any other process global
+state (other than potentially importing additional modules, or other
+expected side effects of running the distribution).
+
+Install hooks have access to the full metadata for the release being
+installed, that of the previous/next release (as appropriate), as well as
+to all the normal runtime information (such as available imports). Hook
+implementations can use this information to perform additional platform
+specific installation steps. To check for the presence or absence of
+"extras", hook implementations should use the same runtime checks that
+would be used during normal operation (such as checking for the availability
+of the relevant dependencies).
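Resolving the ``module:callable`` entry point notation described above can be
sketched in a few lines (the helper name is hypothetical; per the hook
signature rules, tools then call the resolved hook with keyword arguments
only):

```python
import importlib

def resolve_entry_point(specifier):
    """Resolve 'module:callable' notation to the named object.

    The part before the colon is imported as a module; the part after
    it (if any) is looked up attribute by attribute within that module.
    """
    module_name, _, qualified_name = specifier.partition(":")
    obj = importlib.import_module(module_name)
    for attr in filter(None, qualified_name.split(".")):
        obj = getattr(obj, attr)
    return obj

# hook = resolve_entry_point("ComfyChair.install_hooks:postinstall")
# hook(current_meta=meta, previous_meta=None)   # keyword arguments only
```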
 
 
 Metadata Extensions
 ===================
 
 Extensions to the metadata may be present in a mapping under the
-'extensions' key.  The keys must meet the same restrictions as
-distribution names, while the values may be any type natively supported
-in JSON::
+'extensions' key.  The keys must be valid qualified names, while
+the values may be any type natively supported in JSON::
 
     "extensions" : {
-        "chili" : { "type" : "Poblano", "heat" : "Mild" },
-        "languages" : [ "French", "Italian", "Hebrew" ]
+      "chili" : { "type" : "Poblano", "heat" : "Mild" },
+      "languages" : [ "French", "Italian", "Hebrew" ]
     }
 
-To avoid name conflicts, it is recommended that distribution names be used
-to identify metadata extensions. This practice will also make it easier to
+Extension names are defined by distributions that will then make use of
+the additional published metadata in some way.
+
+To reduce the chance of name conflicts, extension names SHOULD use a
+prefix that corresponds to a module name in the distribution that defines
+the meaning of the extension. This practice will also make it easier to
 find authoritative documentation for metadata extensions.
 
+Metadata extensions allow development tools to record information in the
+metadata that may be useful during later phases of distribution. For
+example, a build tool could include default build options in a metadata
+extension when creating an sdist, and use those when creating the wheel
+files later.
+
 
 Extras (optional dependencies)
 ==============================
 
 Extras are additional dependencies that enable an optional aspect
-of the distribution, generally corresponding to a ``try: import
+of the distribution, often corresponding to a ``try: import
 optional_dependency ...`` block in the code.  To support the use of the
 distribution with or without the optional dependencies they are listed
 separately from the distribution's core dependencies and must be requested
 explicitly, either in the dependency specifications of another distribution,
 or else when issuing a command to an installation tool.
 
+Note that installation of extras is not tracked directly by installation
+tools: extras are merely a convenient way to indicate a set of dependencies
+that is needed to provide some optional functionality of the distribution.
+If selective *installation* of components is desired, then multiple
+distributions must be defined rather than relying on the extras system.
+
 The names of extras MUST abide by the same restrictions as those for
 distribution names.
 
@@ -1399,15 +1785,15 @@
 
     "name": "ComfyChair",
     "extras": ["warmup", "c-accelerators"]
-    "may_require": [
+    "run_requires": [
       {
-        "dependencies": ["SoftCushions"],
+        "requires": ["SoftCushions"],
         "extra": "warmup"
       }
     ]
-    "build_may_require": [
+    "build_requires": [
       {
-        "dependencies": ["cython"],
+        "requires": ["cython"],
         "extra": "c-accelerators"
       }
     ]
@@ -1416,15 +1802,34 @@
 relevant extra names inside square brackets after the distribution name when
 specifying the dependency.
 
-Extra specifications MUST support the following additional syntax:
-
-* Multiple features can be requested by separating them with a comma within
+Extra specifications MUST allow the following additional syntax:
+
+* Multiple extras can be requested by separating them with a comma within
   the brackets.
-* All explicitly defined extras may be requested with the ``*`` wildcard
-  character. Note that this does NOT request the implicitly defined
-  ``test`` extra - that must always be requested explicitly when it is
-  desired.
-* Extras may be explicitly excluded by prefixing their name with a hyphen.
+
+* The following special extras request processing of the corresponding
+  lists of dependencies:
+
+  * ``:meta:``: ``meta_requires``
+  * ``:run:``: ``run_requires``
+  * ``:test:``: ``test_requires``
+  * ``:build:``: ``build_requires``
+  * ``:dev:``: ``dev_requires``
+  * ``:*:``: process *all* dependency lists
+
+* The ``*`` character as an extra is a wild card that enables all of the
+  entries defined in the distribution's ``extras`` field.
+
+* Extras may be explicitly excluded by prefixing their name with a ``-``
+  character (this is useful in conjunction with ``*`` to exclude only
+  particular extras that are definitely not wanted, while enabling all
+  others).
+
+* The ``-`` character as an extra specification indicates that the
+  distribution itself should NOT be installed, and also disables the
+  normally implied processing of ``:meta:`` and ``:run:`` dependencies
+  (those may still be requested explicitly using the appropriate extra
+  specifications).
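Splitting a dependency specification into the distribution name and its
requested extras can be sketched as follows (the regular expression and
function name are illustrative only; interpreting the special ``-``, ``*``
and ``:name:`` entries is left to the tool):

```python
import re

# Name[extra1,extra2,...] with the bracketed extras list optional.
_REQ_RE = re.compile(
    r"^\s*(?P<name>[A-Za-z0-9]([A-Za-z0-9._-]*[A-Za-z0-9])?)"
    r"\s*(?:\[(?P<extras>[^\]]*)\])?\s*$"
)

def parse_extras(requirement):
    """Split 'Name[extra,extra,...]' into (name, list_of_extras)."""
    m = _REQ_RE.match(requirement)
    if m is None:
        raise ValueError("invalid requirement: %r" % requirement)
    extras = m.group("extras")
    extra_list = [e.strip() for e in extras.split(",")] if extras else []
    return m.group("name"), extra_list
```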
 
 Command line based installation tools SHOULD support this same syntax to
 allow extras to be requested explicitly.
@@ -1432,15 +1837,32 @@
 The full set of dependency requirements is then based on the top level
 dependencies, along with those of any requested extras.
 
-Example::
+Dependency examples (showing just the ``requires`` subfield)::
 
     "requires": ["ComfyChair[warmup]"]
-        -> requires ``ComfyChair`` and ``SoftCushions`` at run time
+        -> requires ``ComfyChair`` and ``SoftCushions``
 
     "requires": ["ComfyChair[*]"]
-        -> requires ``ComfyChair`` and ``SoftCushions`` at run time, but
-           will also pick up any new optional dependencies other than those
-           needed solely to run the tests
+        -> requires ``ComfyChair`` and ``SoftCushions``, but will also
+           pick up any new extras defined in later versions
+
+Command line examples::
+
+    pip install ComfyChair
+        -> installs ComfyChair with applicable :meta: and :run: dependencies
+
+    pip install ComfyChair[*]
+        -> as above, but also installs all extra dependencies
+
+    pip install ComfyChair[-,:build:,*]
+        -> installs just the build dependencies with all extras
+
+    pip install ComfyChair[-,:build:,:run:,:meta:,:test:,*]
+        -> as above, but also installs dependencies needed to run the tests
+
+    pip install ComfyChair[-,:*:,*]
+        -> installs the full set of development dependencies, but avoids
+           installing ComfyChair itself
 
 
 Environment markers
@@ -1463,15 +1885,15 @@
 requires PyWin32 both at runtime and buildtime when using Windows::
 
     "name": "ComfyChair",
-    "may_require": [
+    "run_requires": [
       {
-        "dependencies": ["pywin32 (>1.0)"],
+        "requires": ["pywin32 (>1.0)"],
         "environment": "sys.platform == 'win32'"
       }
     ]
-    "build_may_require": [
+    "build_requires": [
       {
-        "dependencies": ["pywin32 (>1.0)"],
+        "requires": ["pywin32 (>1.0)"],
         "environment": "sys.platform == 'win32'"
       }
     ]
@@ -1484,7 +1906,8 @@
 The pseudo-grammar is ::
 
     MARKER: EXPR [(and|or) EXPR]*
-    EXPR: ("(" MARKER ")") | (SUBEXPR [(in|==|!=|not in) SUBEXPR])
+    EXPR: ("(" MARKER ")") | (SUBEXPR [CMPOP SUBEXPR])
+    CMPOP: (==|!=|<|>|<=|>=|in|not in)
 
 where ``SUBEXPR`` is either a Python string (such as ``'2.4'``, or
 ``'win32'``) or one of the following marker variables:
@@ -1493,29 +1916,42 @@
 * ``python_full_version``: see definition below
 * ``os_name``: ``os.name``
 * ``sys_platform``: ``sys.platform``
+* ``platform_release``: ``platform.release()``
 * ``platform_version``: ``platform.version()``
 * ``platform_machine``: ``platform.machine()``
 * ``platform_python_implementation``: ``platform.python_implementation()``
+* ``implementation_name``: ``sys.implementation.name``
+* ``implementation_version``: see definition below
+
+If a particular value is not available (such as the ``sys.implementation``
+subattributes in versions of Python prior to 3.3), the corresponding marker
+variable MUST be considered equivalent to the empty string.
 
 Note that all subexpressions are restricted to strings or one of the
-marker variable names, meaning that it is not possible to use other
-sequences like tuples or lists on the right side of the ``in`` and
-``not in`` operators.
-
-Unlike Python, chaining of comparison operations is NOT permitted in
-environment markers.
-
-The ``python_full_version`` marker variable is derived from
-``sys.version_info()`` in accordance with the following algorithm::
-
-    def format_full_version():
-        info = sys.version_info
+marker variable names (which refer to string values), meaning that it is
+not possible to use other sequences like tuples or lists on the right
+side of the ``in`` and ``not in`` operators.
+
+Chaining of comparison operations is permitted using the normal Python
+semantics of an implied ``and``.
+
+The ``python_full_version`` and ``implementation_version`` marker variables
+are derived from ``sys.version_info`` and ``sys.implementation.version``
+respectively, in accordance with the following algorithm::
+
+    def format_full_version(info):
         version = '{0.major}.{0.minor}.{0.micro}'.format(info)
         kind = info.releaselevel
         if kind != 'final':
             version += kind[0] + str(info.serial)
         return version
 
+    python_full_version = format_full_version(sys.version_info)
+    implementation_version = format_full_version(sys.implementation.version)
+
+``python_full_version`` will typically correspond to the leading segment
+of ``sys.version``.
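Putting the marker variables and the version formatting algorithm together,
a tool's marker environment might be assembled as sketched below. The
``evaluate_marker`` helper is an illustrative shortcut only: it reuses
Python's own expression evaluation instead of implementing the restricted
pseudo-grammar, which a real tool handling untrusted metadata must not do:

```python
import os
import platform
import sys

def format_full_version(info):
    version = "{0.major}.{0.minor}.{0.micro}".format(info)
    if info.releaselevel != "final":
        version += info.releaselevel[0] + str(info.serial)
    return version

# Marker variables as defined above; unavailable values (such as
# sys.implementation before Python 3.3) become the empty string.
_impl = getattr(sys, "implementation", None)
MARKER_VARS = {
    "python_version": "{0.major}.{0.minor}".format(sys.version_info),
    "python_full_version": format_full_version(sys.version_info),
    "os_name": os.name,
    "sys_platform": sys.platform,
    "platform_release": platform.release(),
    "platform_version": platform.version(),
    "platform_machine": platform.machine(),
    "platform_python_implementation": platform.python_implementation(),
    "implementation_name": _impl.name if _impl else "",
    "implementation_version": (format_full_version(_impl.version)
                               if _impl else ""),
}

def evaluate_marker(marker):
    """Evaluate a marker expression against MARKER_VARS (sketch only)."""
    return bool(eval(marker, {"__builtins__": {}}, MARKER_VARS))
```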
+
 
 Updating the metadata specification
 ===================================
@@ -1523,13 +1959,80 @@
 The metadata specification may be updated with clarifications without
 requiring a new PEP or a change to the metadata version.
 
-Adding new features (other than through the extension mechanism), or
-changing the meaning of existing fields, requires a new metadata version
-defined in a new PEP.
-
-
-Summary of differences from \PEP 345
-====================================
+Changing the meaning of existing fields or adding new features (other than
+through the extension mechanism) requires a new metadata version defined in
+a new PEP.
+
+
+Appendix A: Conversion notes for legacy metadata
+================================================
+
+The reference implementations for converting from legacy metadata to
+metadata 2.0 are:
+
+* the `wheel project <https://bitbucket.org/dholth/wheel/overview>`__, which
+  adds the ``bdist_wheel`` command to ``setuptools``
+* the `Warehouse project <https://github.com/dstufft/warehouse>`__, which
+  will eventually be migrated to the Python Packaging Authority as the next
+  generation Python Package Index implementation
+* the `distlib project <https://bitbucket.org/pypa/distlib/>`__, which is
+  derived from the core packaging infrastructure created for the
+  ``distutils2`` project
+
+While it is expected that there may be some edge cases where manual
+intervention is needed for clean conversion, the specification has been
+designed to allow fully automated conversion of almost all projects on
+PyPI.
+
+Metadata conversion (especially on the part of the index server) is a
+necessary step to allow installation and analysis tools to start
+benefiting from the new metadata format, without having to wait for
+developers to upgrade to newer build systems.
+
+
+Appendix B: Mapping dependency declarations to an RPM SPEC file
+===============================================================
+
+As an example of mapping this PEP to Linux distro packages, assume an
+example project without any extras defined is split into 2 RPMs
+in a SPEC file: ``example`` and ``example-devel``.
+
+The ``meta_requires`` and ``run_requires`` dependencies would be mapped
+to the Requires dependencies for the "example" RPM (a mapping from
+environment markers relevant to Linux to SPEC file conditions would
+also allow those to be handled correctly)
+
+The ``build_requires`` dependencies would be mapped to the BuildRequires
+dependencies for the "example" RPM.
+
+All defined dependencies relevant to Linux, including those in
+``dev_requires`` and ``test_requires`` would become Requires dependencies
+for the "example-devel" RPM.
+
+A documentation toolchain dependency like Sphinx would either go in
+``build_requires`` (for example, if man pages were included in the
+built distribution) or in ``dev_requires`` (for example, if the
+documentation is published solely through ReadTheDocs or the
+project website). This would be enough to allow an automated converter
+to map it to an appropriate dependency in the spec file.
+
+If the project did define any extras, those could be mapped to additional
+virtual RPMs with appropriate BuildRequires and Requires entries based on
+the details of the dependency specifications. Alternatively, they could
+be mapped to other system package manager features (such as package lists
+in ``yum``).
+
+Other system package managers may have other options for dealing with
+extras (Debian packagers, for example, would have the option to map them
+to "Recommended" or "Suggested" package entries).
+
+The metadata extension format should also allow distribution specific hints
+to be included in the upstream project metadata without needing to manually
+duplicate any of the upstream metadata in a distribution specific format.
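The mapping sketched above can be illustrated in Python. The metadata fragment and the ``spec_requires`` helper are invented for illustration only; a real converter would also translate environment markers into SPEC file conditions.

```python
# Hypothetical metadata fragment and helper, illustrating the Appendix B
# mapping of dependency kinds to RPM Requires/BuildRequires entries.
metadata = {
    "name": "example",
    "run_requires": [{"requires": ["requests (>=2.0)"]}],
    "build_requires": [{"requires": ["setuptools"]}],
    "dev_requires": [{"requires": ["sphinx"]}],
    "test_requires": [{"requires": ["pytest"]}],
}

def spec_requires(meta, kinds):
    """Collect the dependency strings for the given kinds of dependency."""
    deps = []
    for kind in kinds:
        for entry in meta.get(kind, []):
            deps.extend(entry["requires"])
    return deps

# Requires for the "example" RPM (runtime dependencies):
runtime = spec_requires(metadata, ["meta_requires", "run_requires"])
# BuildRequires for the "example" RPM:
build = spec_requires(metadata, ["build_requires"])
# Requires for the "example-devel" RPM:
devel = spec_requires(metadata, ["dev_requires", "test_requires"])
```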
+
+
+Appendix C: Summary of differences from \PEP 345
+=================================================
 
 * Metadata-Version is now 2.0, with semantics specified for handling
   version changes
@@ -1550,21 +2053,21 @@
 
 * Changed the version scheme to be based on PEP 440 rather than PEP 386
 
-* Added the build label mechanism as described in PEP 440
-
-* Support for different development, build, test and deployment dependencies
+* Added the source label mechanism as described in PEP 440
+
+* Support for different kinds of dependencies
 
 * The "Extras" optional dependency mechanism
 
 * A well-defined metadata extension mechanism
 
-* Metabuild hook system
+* Install hook system
 
 * Clarify and simplify various aspects of environment markers:
 
   * allow use of parentheses for grouping in the pseudo-grammar
   * consistently use underscores instead of periods in the variable names
-  * clarify that chained comparisons are not permitted
+  * allow ordered string comparisons and chained comparisons
 
 * More flexible system for defining contact points and contributors
 
@@ -1574,9 +2077,11 @@
 
 * Updated obsolescence mechanism
 
-* Added "License URL" field
-
-* Explicit declaration of description markup format
+* Identification of supporting documents in the ``dist-info`` directory:
+
+  * Allows markup formats to be indicated through file extensions
+  * Standardises the common practice of taking the description from README
+  * Also supports inclusion of license files and changelogs
 
 * With all due respect to Charles Schulz and Peanuts, many of the examples
   have been updated to be more `thematically appropriate`_ for Python ;)
@@ -1625,7 +2130,7 @@
 subfields.
 
 The old serialisation format also wasn't amenable to easy conversion to
-standard Python data structures for use in the new metabuild hook APIs, or
+standard Python data structures for use in the new install hook APIs, or
 in future extensions to the importer APIs to allow them to provide
 information for inclusion in the installation database.
 
@@ -1649,33 +2154,47 @@
 See PEP 440 for the rationale behind the addition of this field.
 
 
-Development, build and deployment dependencies
-----------------------------------------------
-
-The separation of the ``requires``, ``build_requires`` and ``dev_requires``
-fields allow a distribution to indicate whether a dependency is needed
-specifically to develop, build or deploy the distribution.
-
-As distribution metadata improves, this should allow much greater control
-over where particular dependencies end up being installed .
+Support for different kinds of dependencies
+-------------------------------------------
+
+The separation of the five different kinds of dependency allows a
+distribution to indicate whether a dependency is needed specifically to
+develop, build, test or use the distribution.
+
+To allow for metadistributions like PyObjC, while still actively
+discouraging overly strict dependency specifications, the separate
+``meta`` dependency fields are used to separate out those dependencies
+where exact version specifications are appropriate.
+
+The advantage of having these distinctions supported in the upstream Python
+specific metadata is that even if a project doesn't care about these
+distinctions itself, it may be more amenable to patches from
+downstream redistributors that separate the fields appropriately. Over time,
+this should allow much greater control over where and when particular
+dependencies end up being installed.
+
+The names for the dependency fields have been deliberately chosen to avoid
+conflicting with the existing terminology in setuptools and previous
+versions of the metadata standard. Specifically, the names ``requires``,
+``install_requires`` and ``setup_requires`` are not used, which will
+hopefully reduce confusion when converting legacy metadata to the new
+standard.
 
 
 Support for optional dependencies for distributions
 ---------------------------------------------------
 
 The new extras system allows distributions to declare optional
-features, and to use the ``may_require`` and ``build_may_require`` fields
-to indicate when particular dependencies are needed only to support those
-features. It is derived from the equivalent system that is already in
-widespread use as part of ``setuptools`` and allows that aspect of the
-legacy ``setuptools`` metadata to be accurately represented in the new
-metadata format.
-
-The ``test`` extra is implicitly defined for all distributions, as it
-ties in with the new metabuild hook offering a standard way to request
-execution of a distribution's test suite. Identifying test suite
-dependencies is already one of the most popular uses of the extras system
-in ``setuptools``.
+behaviour, and to use the dependency fields to indicate when
+particular dependencies are needed only to support that behaviour. It is
+derived from the equivalent system that is already in widespread use as
+part of ``setuptools`` and allows that aspect of the legacy ``setuptools``
+metadata to be accurately represented in the new metadata format.
+
+The additions to the extras syntax relative to setuptools are defined to
+make it easier to express the various possible combinations of dependencies,
+in particular those associated with build systems (with optional support
+for running the test suite) and development systems.
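As an illustration of how these dependency kinds combine with conditional dependencies, the following sketch filters a hypothetical ``run_requires`` list. The package names and the ``dependencies_for`` helper are invented; a real tool would evaluate the environment markers properly rather than accept a callback.

```python
# A hypothetical "run_requires" fragment: one unconditional entry, one
# guarded by an environment marker, and one guarded by the "test" extra.
run_requires = [
    {"requires": ["six"]},
    {"requires": ["pywin32 (>=1.0)"], "environment": "sys_platform == 'win32'"},
    {"requires": ["coverage"], "extra": "test"},
]

def dependencies_for(entries, extras=(), marker_holds=lambda marker: True):
    """Keep entries whose extra was requested and whose marker holds."""
    deps = []
    for entry in entries:
        if "extra" in entry and entry["extra"] not in extras:
            continue
        if "environment" in entry and not marker_holds(entry["environment"]):
            continue
        deps.extend(entry["requires"])
    return deps

# With no extras requested (and all markers assumed true):
dependencies_for(run_requires)
# With the "test" extra requested as well, "coverage" is included:
dependencies_for(run_requires, extras=("test",))
```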
 
 
 Support for metadata extensions
@@ -1691,36 +2210,70 @@
 the chosen extension, and the new extras mechanism, allowing support for
 particular extensions to be provided as optional features.
 
-
-Support for metabuild hooks
+Possible future uses for extensions include declaration of plugins for
+other distributions, hints for automatic conversion to Linux system
+packages, and inclusion of CVE references to mark security releases.
+
+
+Support for install hooks
 ---------------------------
 
-The new metabuild system is designed to allow the wheel format to fully
-replace direct installation on deployment targets, by allows projects like
-Twisted to still execute code following installation from a wheel file.
-
-Falling back to invoking ``setup.py`` directly rather than using a
-metabuild hook will remain an option when relying on version 1.x metadata,
-and is also used as the interim solution for installation from source
-archives.
-
-The ``test_installed_dist`` metabuild hook is included as a complement to
-the ability to explicitly specify test dependencies.
+The new install hook system is designed to allow the wheel format to fully
+replace direct installation on deployment targets, by allowing projects to
+explicitly define code that should be executed following installation from
+a wheel file.
+
+This may range from something relatively simple, like the `two line
+refresh <https://twistedmatrix.com/documents/current/core/howto/plugin.html#auto3>`__
+of the Twisted plugin caches that the Twisted developers recommend for
+any project that provides Twisted plugins, to more complex platform
+dependent behaviour, potentially in conjunction with appropriate
+metadata extensions and ``supports_environments`` entries.
+
+For example, upstream declaration of external dependencies for various
+Linux distributions in a distribution neutral format may be supported by
+defining an appropriate metadata extension that is read by a postinstall
+hook and converted into an appropriate invocation of the system package
+manager. Other operations (such as registering COM DLLs on Windows,
+registering services for automatic startup on any platform, or altering
+firewall settings) may need to be undertaken with elevated privileges,
+meaning they cannot be deferred to implicit execution on first use of the
+distribution.
+
+The install hook and metadata extension systems allow support for such
+activities to be pursued independently by the individual platform
+communities, while still interoperating with the cross-platform Python
+tools.
+
+Legacy packages that expect to be able to run code on target systems using
+``setup.py install`` will no longer work correctly. Such packages will
+already break when pip 1.4+ is configured to use a wheel cache directory.
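As a sketch only, the simple end of that spectrum might look like the following. The hook name and signature here are illustrative rather than normative, and the body is the two-line plugin cache refresh recommended by the Twisted developers.

```python
def postinstall_hook(current_meta, previous_meta=None):
    """Hypothetical postinstall hook (signature is illustrative only).

    Refreshes the Twisted plugin cache after installation from a wheel,
    replacing the code such projects previously ran via setup.py install.
    """
    # Imported lazily so the hook module can be loaded without Twisted
    from twisted.plugin import IPlugin, getPlugins
    list(getPlugins(IPlugin))
```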
 
 
 Changes to environment markers
 ------------------------------
 
-The changes to environment markers were just clarifications and
+There are three substantive changes to environment markers in this version:
+
+* ``platform_release`` was added, as it provides more useful information
+  than ``platform_version`` on at least Linux and Mac OS X (specifically,
+  it provides details of the running kernel version)
+* ordered comparison of strings is allowed, as this is more useful for
+  setting minimum and maximum versions when declaring conditional
+  dependencies or indicating which platforms are supported
+* comparison chaining is explicitly allowed, as this becomes useful in the
+  presence of ordered comparisons
+
+The other changes to environment markers are just clarifications and
 simplifications to make them easier to use.
 
 The arbitrariness of the choice of ``.`` and ``_`` in the different
-variables was addressed by standardising on ``_`` (as these are predefined
-variables rather than live references into the Python module namespace)
-
-The use of parentheses for grouping and the disallowance of chained
-comparisons were added to address some underspecified behaviour in the
-previous version of the specification.
+variables was addressed by standardising on ``_`` (as these are all
+predefined variables rather than live references into the Python module
+namespace).
+
+The use of parentheses for grouping was explicitly noted to address some
+underspecified behaviour in the previous version of the specification.
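The effect of these changes can be illustrated with a toy evaluator. This is a sketch only: it leans on Python's own ``eval()``, whereas a conforming tool would parse the marker pseudo-grammar directly.

```python
def evaluate_marker(marker, env):
    """Toy marker evaluator for illustration; real tools should parse
    the pseudo-grammar rather than reuse Python's eval()."""
    return bool(eval(marker, {"__builtins__": {}}, dict(env)))

env = {"python_version": "2.7", "sys_platform": "linux2"}

# Chained, ordered string comparison (newly permitted by this version):
evaluate_marker("'2.6' <= python_version < '3.0'", env)
# Parentheses for grouping:
evaluate_marker("(sys_platform == 'win32') or python_version >= '2.6'", env)
```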
 
 
 Updated contact information
@@ -1751,8 +2304,9 @@
 has been used to replace several older fields with poorly defined semantics.
 
 For the moment, the old ``Requires-External`` field has been removed
-entirely. Possible replacements may be explored through the metadata
-extension mechanism.
+entirely. The combination of explicit support for post install hooks and the
+metadata extension mechanism will hopefully prove to be a more useful
+replacement.
 
 
 Updated obsolescence mechanism
@@ -1770,22 +2324,55 @@
 is not widely supported, and so removing it does not present any significant
 barrier to tools and projects adopting the new metadata format.
 
-Explicit markup for description
--------------------------------
-
-Currently, PyPI attempts to detect the markup format by rendering it as
-reStructuredText, and if that fails, treating it as plain text. Allowing
-the intended format to be stated explicitly will allow this guessing to be
-removed, and more informative error reports to be provided to users when
-a rendering error occurs.
-
-This is especially necessary since PyPI applies additional restrictions to
+
+Included text documents
+-----------------------
+
+Currently, PyPI attempts to determine the description's markup format by
+rendering it as reStructuredText, and if that fails, treating it as plain
+text.
+
+Furthermore, many projects simply read their long description in from an
+existing README file in ``setup.py``. The popularity of this practice is
+only expected to increase, as many online version control systems
+(including both GitHub and BitBucket) automatically display such files
+on the landing page for the project.
+
+Standardising on the inclusion of the long description as a separate
+file in the ``dist-info`` directory allows this to be simplified:
+
+* An existing file can just be copied into the ``dist-info`` directory as
+  part of creating the sdist
+* The expected markup format can be determined by inspecting the file
+  extension of the specified path
+
+Allowing the intended format to be stated explicitly in the path allows
+the format guessing to be removed and more informative error reports to be
+provided to users when a rendering error occurs.
+
+This is especially helpful since PyPI applies additional restrictions to
 the rendering process for security reasons, thus a description that renders
 correctly on a developer's system may still fail to render on the server.
 
-
-Deferred features
-=================
+The document naming system used to achieve this then makes it relatively
+straightforward to allow declaration of alternative markup formats like
+HTML, Markdown and AsciiDoc through the use of appropriate file
+extensions, as well as to define similar included documents for the
+project's license and changelog.
+
+Grouping the included document names into a single top level field gives
+automated tools the option of treating them as arbitrary documents without
+worrying about their contents.
+
+Requiring that the included documents be added to the ``dist-info`` metadata
+directory means that the complete metadata for the distribution can be
+extracted from an sdist or binary archive simply by extracting that
+directory, without needing to check for references to other files in the
+sdist.
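For example, the supporting documents for a distribution might be declared and interpreted as follows. The field layout mirrors the ``document_names`` entry in the accompanying JSON schema, while the file names and the helper are invented for illustration.

```python
import os.path

# Hypothetical "document_names" fragment; the file names are examples only.
document_names = {
    "description": "README.rst",
    "license": "LICENSE.txt",
    "changelog": "CHANGELOG.md",
}

_MARKUP_BY_EXTENSION = {
    ".rst": "reStructuredText",
    ".md": "Markdown",
    ".txt": "plain text",
}

def markup_format(document_name):
    """Sketch of the extension-based format detection described above."""
    ext = os.path.splitext(document_name)[1].lower()
    return _MARKUP_BY_EXTENSION.get(ext, "plain text")
```

With these names, ``markup_format(document_names["description"])`` reports reStructuredText, so no format guessing is needed when rendering the description.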
+
+
+Appendix D: Deferred features
+=============================
 
 Several potentially useful features have been deliberately deferred in
 order to better prioritise our efforts in migrating to the new metadata
@@ -1793,15 +2380,25 @@
 new metadata, but which can be readily added in metadata 2.1 without
 breaking any use cases already supported by metadata 2.0.
 
-Once the ``pypi``, ``setuptools``, ``pip`` and ``distlib`` projects
-support creation and consumption of metadata 2.0, then we may revisit
-the creation of metadata 2.1 with these additional features.
-
-.. note::
-
-   Given the nature of this PEP as an interoperability specification,
-   this section will probably be removed before the PEP is accepted.
-   However, it's useful to have it here while discussion is ongoing.
+Once the ``pypi``, ``setuptools``, ``pip``, ``wheel`` and ``distlib``
+projects support creation and consumption of metadata 2.0, then we may
+revisit the creation of metadata 2.1 with some or all of these additional
+features.
+
+
+MIME type registration
+----------------------
+
+At some point after acceptance of the PEP, I will likely submit the
+following MIME type registration requests to IANA:
+
+* Full metadata: ``application/vnd.python.pydist+json``
+* Essential dependency resolution metadata:
+  ``application/vnd.python.pydist-dependencies+json``
+
+It's even possible we may be able to just register the ``vnd.python``
+namespace under the banner of the PSF rather than having to register
+the individual subformats.
 
 
 String methods in environment markers
@@ -1816,62 +2413,82 @@
 than a little strange.
 
 
-Module listing
---------------
-
-A top level ``"module"`` key, referencing a list of strings, with each
-giving the fully qualified name of a public package or module provided
-by the distribution.
-
-A flat list would be used in order to correctly accommodate namespace
-packages (where a distribution may provide subpackages or submodules without
-explicitly providing the parent namespace package).
-
-Example::
-
-    "modules": [
-        "comfy.chair"
-    ]
+Module and file listings
+------------------------
+
+Derived metadata giving the modules and files included in built
+distributions may be useful at some point in the future. (At least RPM
+provides this, and I believe the APT equivalent does as well.)
 
 Explicitly providing a list of public module names will likely help
 with enabling features in RPM like "Requires: python(requests)", as well
 as providing richer static metadata for analysis from PyPI.
 
-However, this is just extra info that doesn't impact installing from wheels,
-so it is a good candidate for postponing to metadata 2.1.
-
-
-Additional metabuild hooks
---------------------------
-
-The following draft metabuild operations have been deferred for now:
+However, this is just extra info that doesn't impact reliably installing
+from wheels, so it is a good candidate for postponing to metadata 2.1
+(at the earliest).
+
+
+Additional install hooks
+------------------------
+
+In addition to the postinstall and preuninstall hooks described in the PEP,
+other distribution systems (like RPM) include the notion of preinstall
+and postuninstall hooks. These hooks would run with the runtime dependencies
+installed, but without the distribution itself. These have been deliberately
+omitted, as they're well suited to being explored further as metadata
+extensions.
+
+Similarly, the idea of "optional" postinstall and preuninstall hooks can
+be pursued as a metadata extension.
+
+By contrast, the mandatory postinstall and preuninstall hooks have been
+included directly in the PEP, specifically to ensure installation tools
+don't silently ignore them. This ensures users will either be able to
+install such distributions, or else receive an explicit error at installation
+time.
+
+
+Metabuild system
+----------------
+
+This version of the metadata specification continues to use ``setup.py``
+and the distutils command syntax to invoke build and test related
+operations on a source archive or VCS checkout.
+
+It may be desirable to replace these in the future with tool independent
+entry points that support:
 
 * Generating the metadata file on a development system
-* Generating a source archive on a development system
+* Generating an sdist on a development system
 * Generating a binary archive on a build system
+* Running the test suite on a built (but not installed) distribution
 
 Metadata 2.0 deliberately focuses on wheel based installation, leaving
-tarball and sdist based installation to use the existing ``setup.py``
-based ``distutils`` command interface.
-
-In the meantime, the above four operations will continue to be handled
-through the ``distutils``/``setuptools`` command system:
+sdist, source archive, and VCS checkout based installation to use the
+existing ``setup.py`` based ``distutils`` command interface.
+
+In the meantime, the above operations will be handled through the
+``distutils``/``setuptools`` command system:
 
 * ``python setup.py dist_info``
 * ``python setup.py sdist``
+* ``python setup.py build_ext --inplace``
+* ``python setup.py test``
 * ``python setup.py bdist_wheel``
 
-The following additional metabuild hooks may be added in metadata 2.1 to
+The following metabuild hooks may be defined in metadata 2.1 to
 cover these operations without relying on ``setup.py``:
 
-* ``make_dist_info``: generate the source archive's dist_info directory
-* ``make_sdist``: construct a source archive
-* ``build_wheel``: construct a binary wheel archive from an sdist source
-  archive
-
-Tentative signatures have been designed for those hooks, but they will
-not be pursued further until 2.1 (note that the current signatures for
-the hooks do *not* adequately handle the "extras" concept)::
+* ``make_dist_info``: generate the sdist's dist_info directory
+* ``make_sdist``: create the contents of an sdist
+* ``build_dist``: create the contents of a binary wheel archive from an
+  unpacked sdist
+* ``test_built_dist``: run the test suite for a built distribution
+
+Tentative signatures have been designed for those hooks, but in order to
+better focus initial development efforts on the integration and installation
+use cases, they will not be pursued further until metadata 2.1::
 
     def make_dist_info(source_dir, info_dir):
         """Generate the contents of dist_info for an sdist archive
@@ -1896,11 +2513,11 @@
         Returns the distribution metadata as a dictionary.
         """
 
-    def build_wheel(sdist_dir, contents_dir, info_dir, compatibility=None):
-        """Generate the contents of a wheel archive
-
-        *source_dir* points to an unpacked source archive
-        *contents_dir* is the destination where the wheel contents should be
+    def build_dist(sdist_dir, built_dir, info_dir, compatibility=None):
+        """Generate the contents of a binary wheel archive
+
+        *sdist_dir* points to an unpacked sdist
+        *built_dir* is the destination where the wheel contents should be
         written (note that archiving the contents is the responsibility of
         the metabuild tool rather than the hook function)
         *info_dir* is the destination where the wheel metadata files should
@@ -1912,31 +2529,102 @@
         Returns the actual compatibility tag for the build
         """
 
-
-Rejected Features
-=================
+    def test_built_dist(sdist_dir, built_dir, info_dir):
+        """Check a built (but not installed) distribution works as expected
+
+        *sdist_dir* points to an unpacked sdist
+        *built_dir* points to a platform appropriate unpacked wheel archive
+        (which may be missing the wheel metadata directory)
+        *info_dir* points to the appropriate wheel metadata directory
+
+        Requires that the distribution's test dependencies be installed
+        (indicated by the ``:test:`` extra).
+
+        Returns ``True`` if the check passes, ``False`` otherwise.
+        """
+
+As with the existing install hooks, checking for extras would be done
+using the same import based checks as are used for runtime extras. That
+way it doesn't matter if the additional dependencies were requested
+explicitly or just happen to be available on the system.
+
+There are still a number of open questions with this design, such as whether
+a single build hook is sufficient to cover both "build for testing" and
+"prep for deployment", as well as various complexities like support for
+cross-compilation of binaries, specification of target platforms and
+Python versions when creating wheel files, etc.
+
+Opting to retain the status quo for now allows us to make progress on
+improved metadata publication and binary installation support, rather than
+having to delay that awaiting the creation of a viable metabuild framework.
+
+
+Appendix E: Rejected features
+=============================
 
 The following features have been explicitly considered and rejected as
 introducing too much additional complexity for too small a gain in
 expressiveness.
 
-.. note::
-
-   Given the nature of this PEP as an interoperability specification,
-   this section will probably be removed before the PEP is accepted.
-   However, it's useful to have it here while discussion is ongoing.
-
-
-Detached metadata
------------------
-
-Rather than allowing some large items (such as the description field) to
-be distributed separately, this PEP instead defines two metadata subsets
-that should support more reasonable caching and API designs (for example,
-only the essential dependency resolution metadata would be distributed
-through TUF, and it is entirely possible the updated sdist, wheel and
-installation database specs will use the abbreviated metadata, leaving
-the full metadata as the province of index servers).
+
+Separate lists for conditional and unconditional dependencies
+-------------------------------------------------------------
+
+Earlier versions of this PEP used separate lists for conditional and
+unconditional dependencies. This turned out to be annoying to handle in
+automated tools and removing it also made the PEP and metadata schema
+substantially shorter, suggesting it was actually harder to explain as well.
+
+
+Disallowing underscores in distribution names
+---------------------------------------------
+
+Debian doesn't actually permit underscores in names, but that seems
+unduly restrictive for this spec given the common practice of using
+valid Python identifiers as Python distribution names. A Debian side
+policy of converting underscores to hyphens seems easy enough to
+implement (and the requirement to consider hyphens and underscores as
+equivalent ensures that doing so won't introduce any conflicts).
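A minimal sketch of the name comparison rule this implies (the helper name is invented):

```python
def normalise_name(name):
    """Treat hyphens and underscores as equivalent when comparing names."""
    return name.lower().replace("_", "-")

# Both spellings identify the same distribution:
normalise_name("my_example_dist") == normalise_name("My-Example-Dist")
```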
+
+
+Allowing the use of Unicode in distribution names
+-------------------------------------------------
+
+This PEP deliberately avoids following Python 3 down the path of arbitrary
+Unicode identifiers, as the security implications of doing so are
+substantially worse in the software distribution use case (it opens
+up far more interesting attack vectors than mere code obfuscation).
+
+In addition, the existing tools really only work properly if you restrict
+names to ASCII and changing that would require a *lot* of work for all
+the automated tools in the chain.
+
+It may be reasonable to revisit this question at some point in the (distant)
+future, but setting up a more reliable software distribution system is
+challenging enough without adding more general Unicode identifier support
+into the mix.
+
+
+Single list for conditional and unconditional dependencies
+----------------------------------------------------------
+
+It's technically possible to store the conditional and unconditional
+dependencies of each kind in a single list and switch the handling based on
+the entry type (string or mapping).
+
+However, the current ``*requires`` vs ``*may-require`` two list design seems
+easier to understand and work with, since it's only the conditional
+dependencies that need to be checked against the requested extras list and
+the target installation environment.
+
+
+Depending on source labels
+--------------------------
+
+There is no mechanism to express a dependency on a source label - they
+are included in the metadata for internal project reference only. Instead,
+dependencies must be expressed in terms of either public versions or else
+direct URL references.
 
 
 Alternative dependencies
@@ -1961,7 +2649,7 @@
 database driver" metadata extension where a project depends on SQL Alchemy,
 and then declares in the extension which database drivers are checked for
 compatibility by the upstream project (similar to the advisory
-``supports-platform`` field in the main metadata).
+``supports_environments`` field in the main metadata).
 
 We're also getting better support for "virtual provides" in this version of
 the metadata standard, so this may end up being an installer and index
@@ -1989,9 +2677,67 @@
 
 Under the revised metadata design, conditional "provides" based on runtime
 features or the environment would go in a separate "may_provide" field.
-However, I'm not convinced there's a great use case for that, so the idea
+However, it isn't clear there's any use case for doing that, so the idea
 is rejected unless someone can present a compelling use case (and even then
-the idea wouldn't be reconsidered until metadata 2.1 at the earliest).
+the idea won't be reconsidered until metadata 2.1 at the earliest).
+
+
+A hook to run tests against installed distributions
+---------------------------------------------------
+
+Earlier drafts of this PEP defined a hook for running automated
+tests against an *installed* distribution. This isn't actually what you
+generally want - you want the ability to test a *built* distribution,
+potentially relying on files which won't be included in the binary archives.
+
+RPM's "check" step also runs between the build step and the install step,
+rather than after the install step.
+
+Accordingly, the ``test_installed_dist`` hook has been removed, and the
+``test_built_dist`` metabuild hook has been tentatively defined. However,
+along with the rest of the metabuild hooks, further consideration has been
+deferred until metadata 2.1 at the earliest.
+
+
+Extensible signatures for the install hooks
+-------------------------------------------
+
+The install hooks have been deliberately designed to NOT accept arbitrary
+keyword arguments that the hook implementation is then expected to ignore.
+
+The argument in favour of that API design technique is to allow the addition
+of new optional arguments in the future, without requiring the definition
+of a new install hook, or migration to version 3.0 of the metadata
+specification. It is a technique very commonly seen in function wrappers
+which merely pass arguments along to the inner function rather than
+processing them directly.
+
+However, the install hooks are already designed to have access to the full
+metadata for the distribution (including all metadata extensions and
+the previous/next version when appropriate), as well as to the full target
+deployment environment.
+
+This means there are two candidates for additional information that
+could be passed as arbitrary keyword arguments:
+
+* installer dependent settings
+* user provided installation options
+
+The first of those runs explicitly counter to one of the core goals of the
+metadata 2.0 specification: decoupling the software developer's choice of
+development and publication tools from the software integrator's choice of
+integration and deployment tools.
+
+The second is a complex problem that has a readily available workaround in
+the form of operating system level environment variables (this is also
+one way to interoperate with platform specific installation tools).
+
+Alternatively, installer developers may either implicitly inject an
+additional metadata extension when invoking the install hook, or else
+define an alternate hook signature as a distinct metadata extension to be
+provided by the distribution. Either of these approaches makes the
+reliance on installer-dependent behaviour suitably explicit in either
+the install hook implementation or the distribution metadata.
 
 
 References
diff --git a/pep-0426/pydist-schema.json b/pep-0426/pydist-schema.json
new file mode 100644
--- /dev/null
+++ b/pep-0426/pydist-schema.json
@@ -0,0 +1,329 @@
+{
+  "id": "http://www.python.org/dev/peps/pep-0426/",
+  "$schema": "http://json-schema.org/draft-04/schema#",
+  "title": "Metadata for Python Software Packages 2.0",
+  "type": "object",
+  "properties": {
+    "metadata_version": {
+      "description": "Version of the file format",
+      "type": "string",
+      "pattern": "^(\\d+(\\.\\d+)*)$"
+    },
+    "generator": {
+      "description": "Name and version of the program that produced this file.",
+      "type": "string",
+      "pattern": "^[0-9A-Za-z]([0-9A-Za-z_.-]*[0-9A-Za-z])( \\(.*\\))?$"
+    },
+    "name": {
+      "description": "The name of the distribution.",
+      "type": "string",
+      "$ref": "#/definitions/distribution_name"
+    },
+    "version": {
+      "description": "The distribution's public version identifier",
+      "type": "string",
+      "pattern": "^(\\d+(\\.\\d+)*)((a|b|c|rc)(\\d+))?(\\.(post)(\\d+))?(\\.(dev)(\\d+))?$"
+    },
+    "source_label": {
+      "description": "A constrained identifying text string",
+      "type": "string",
+      "pattern": "^[0-9a-z_.-+]+$"
+    },
+    "source_url": {
+      "description": "A string containing a full URL where the source for this specific version of the distribution can be downloaded.",
+      "type": "string",
+      "format": "uri"
+    },
+    "summary": {
+      "description": "A one-line summary of what the distribution does.",
+      "type": "string"
+    },
+    "document_names": {
+      "description": "Names of supporting metadata documents",
+      "type": "object",
+      "properties": {
+        "description": {
+          "type": "string",
+          "$ref": "#/definitions/document_name"
+        },
+        "changelog": {
+          "type": "string",
+          "$ref": "#/definitions/document_name"
+        },
+        "license": {
+          "type": "string",
+          "$ref": "#/definitions/document_name"
+        }
+      },
+      "additionalProperties": false
+    },
+    "keywords": {
+      "description": "A list of additional keywords to be used to assist searching for the distribution in a larger catalog.",
+      "type": "array",
+      "items": {
+        "type": "string"
+      }
+    },
+    "license": {
+      "description": "A string indicating the license covering the distribution.",
+      "type": "string"
+    },
+    "classifiers": {
+      "description": "A list of strings, with each giving a single classification value for the distribution.",
+      "type": "array",
+      "items": {
+        "type": "string"
+      }
+    },
+    "contacts": {
+      "description": "A list of contributor entries giving the recommended contact points for getting more information about the project.",
+      "type": "array",
+      "items": {
+        "type": "object",
+        "$ref": "#/definitions/contact"
+      }
+    },
+    "contributors": {
+      "description": "A list of contributor entries for other contributors not already listed as current project points of contact.",
+      "type": "array",
+      "items": {
+        "type": "object",
+        "$ref": "#/definitions/contact"
+      }
+    },
+    "project_urls": {
+      "description": "A mapping of arbitrary text labels to additional URLs relevant to the project.",
+      "type": "object"
+    },
+    "extras": {
+      "description": "A list of optional sets of dependencies that may be used to define conditional dependencies in \"may_require\" and similar fields.",
+      "type": "array",
+      "items": {
+        "type": "string",
+        "$ref": "#/definitions/extra_name"
+      }
+    },
+    "meta_requires": {
+      "description": "A list of subdistributions made available through this metadistribution.",
+      "type": "array",
+      "$ref": "#/definitions/dependencies"
+    },
+    "run_requires": {
+      "description": "A list of other distributions needed to run this distribution.",
+      "type": "array",
+      "$ref": "#/definitions/dependencies"
+    },
+    "test_requires": {
+      "description": "A list of other distributions needed when this distribution is tested.",
+      "type": "array",
+      "$ref": "#/definitions/dependencies"
+    },
+    "build_requires": {
+      "description": "A list of other distributions needed when this distribution is built.",
+      "type": "array",
+      "$ref": "#/definitions/dependencies"
+    },
+    "dev_requires": {
+      "description": "A list of other distributions needed when this distribution is developed.",
+      "type": "array",
+      "$ref": "#/definitions/dependencies"
+    },
+    "provides": {
+      "description": "A list of strings naming additional dependency requirements that are satisfied by installing this distribution. These strings must be of the form Name or Name (Version)",
+      "type": "array",
+      "items": {
+        "type": "string",
+        "$ref": "#/definitions/provides_declaration"
+      }
+    },
+    "modules": {
+      "description": "A list of modules and/or packages available for import after installing this distribution.",
+      "type": "array",
+      "items": {
+        "type": "string",
+        "$ref": "#/definitions/qualified_name"
+      }
+    },
+    "namespaces": {
+      "description": "A list of namespace packages this distribution contributes to",
+      "type": "array",
+      "items": {
+        "type": "string",
+        "$ref": "#/definitions/qualified_name"
+      }
+    },
+    "commands": {
+      "description": "Command line interfaces provided by this distribution",
+      "type": "object",
+      "$ref": "#/definitions/commands"
+    },
+    "exports": {
+      "description": "Other exported interfaces provided by this distribution",
+      "type": "object",
+      "$ref": "#/definitions/exports"
+    },
+    "obsoleted_by": {
+      "description": "A string that indicates that this project is no longer being developed. The named project provides a substitute or replacement.",
+      "type": "string",
+      "$ref": "#/definitions/requirement"
+    },
+    "supports_environments": {
+      "description": "A list of strings specifying the environments that the distribution explicitly supports.",
+      "type": "array",
+      "items": {
+        "type": "string",
+        "$ref": "#/definitions/environment_marker"
+      }
+    },
+    "install_hooks": {
+      "description": "The install_hooks field is used to define various operations that may be invoked on a distribution in a platform independent manner.",
+      "type": "object",
+      "properties": {
+        "postinstall": {
+          "type": "string",
+          "$ref": "#/definitions/export_specifier"
+        },
+        "preuninstall": {
+          "type": "string",
+          "$ref": "#/definitions/export_specifier"
+        }
+      }
+    },
+    "extensions": {
+      "description": "Extensions to the metadata may be present in a mapping under the 'extensions' key.",
+      "type": "object"
+    }
+  },
+
+  "required": ["metadata_version", "name", "version", "summary"],
+  "additionalProperties": false,
+
+  "definitions": {
+    "contact": {
+      "type": "object",
+      "properties": {
+        "name": {
+          "type": "string"
+        },
+        "email": {
+          "type": "string"
+        },
+        "url": {
+          "type": "string"
+        },
+        "role": {
+          "type": "string"
+        }
+      },
+      "required": ["name"],
+      "additionalProperties": false
+    },
+    "dependencies": {
+      "type": "array",
+      "items": {
+        "type": "object",
+        "$ref": "#/definitions/dependency"
+      }
+    },
+    "dependency": {
+      "type": "object",
+      "properties": {
+        "extra": {
+          "type": "string",
+          "$ref": "#/definitions/valid_name"
+        },
+        "environment": {
+          "type": "string",
+          "$ref": "#/definitions/environment_marker"
+        },
+        "requires": {
+          "type": "array",
+          "items": {
+            "type": "string",
+            "$ref": "#/definitions/requirement"
+          }
+        }
+      },
+      "required": ["requires"],
+      "additionalProperties": false
+    },
+    "commands": {
+      "type": "object",
+      "properties": {
+        "wrap_console": {
+          "type": "object",
+          "$ref": "#/definitions/command_map"
+        },
+        "wrap_gui": {
+          "type": "object",
+          "$ref": "#/definitions/command_map"
+        },
+        "prebuilt": {
+          "type": "array",
+          "items": {
+            "type": "string",
+            "$ref": "#/definitions/relative_path"
+          }
+        }
+      },
+      "additionalProperties": false
+    },
+    "exports": {
+      "type": "object",
+      "patternProperties": {
+        "^[A-Za-z]([0-9A-Za-z_])*([.][A-Za-z]([0-9A-Za-z_])*)*$": {
+          "type": "object",
+          "patternProperties": {
+            ".": {
+              "type": "string",
+              "$ref": "#/definitions/export_specifier"
+            }
+          },
+          "additionalProperties": false
+        }
+      },
+      "additionalProperties": false
+    },
+    "command_map": {
+      "type": "object",
+      "patternProperties": {
+        "^[0-9A-Za-z]([0-9A-Za-z_.-]*[0-9A-Za-z])?$": {
+          "type": "string",
+          "$ref": "#/definitions/export_specifier"
+        }
+      },
+      "additionalProperties": false
+    },
+    "distribution_name": {
+        "type": "string",
+        "pattern": "^[0-9A-Za-z]([0-9A-Za-z_.-]*[0-9A-Za-z])?$"
+    },
+    "requirement": {
+        "type": "string"
+    },
+    "provides_declaration": {
+        "type": "string"
+    },
+    "environment_marker": {
+        "type": "string"
+    },
+    "document_name": {
+        "type": "string"
+    },
+    "extra_name" : {
+        "type": "string",
+        "pattern": "^[0-9A-Za-z]([0-9A-Za-z_.-]*[0-9A-Za-z])?$"
+    },
+    "relative_path" : {
+        "type": "string"
+    },
+    "export_specifier": {
+      "type": "string",
+      "pattern": "^([A-Za-z_][A-Za-z_0-9]*([.][A-Za-z_][A-Za-z_0-9]*)*)(:[A-Za-z_][A-Za-z_0-9]*([.][A-Za-z_][A-Za-z_0-9]*)*)?(\\[[0-9A-Za-z]([0-9A-Za-z_.-]*[0-9A-Za-z])?\\])?$"
+    },
+    "qualified_name" : {
+        "type": "string",
+        "pattern": "^[A-Za-z_][A-Za-z_0-9]*([.][A-Za-z_][A-Za-z_0-9]*)*$"
+    }
+  }
+}
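As a quick sanity check on the schema above, the ``version`` pattern can be exercised directly with the standard library ``re`` module (a sketch only; full document validation would use a JSON Schema draft-04 validator, which is not assumed here):

```python
import re

# The "version" pattern copied from the schema above.
VERSION = re.compile(
    r"^(\d+(\.\d+)*)((a|b|c|rc)(\d+))?(\.(post)(\d+))?(\.(dev)(\d+))?$"
)

# Identifiers the scheme accepts: final, pre-, post- and dev releases.
for ok in ("1.0", "1.0a1", "2.0.post1", "1.0rc2.dev456", "3.3.9.45"):
    assert VERSION.match(ok), ok

# Identifiers it rejects: leading "v", hyphenated suffixes, bare ".post".
for bad in ("v1.0", "1.0-beta", "1.0.post"):
    assert not VERSION.match(bad), bad
```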
diff --git a/pep-0432.txt b/pep-0432.txt
--- a/pep-0432.txt
+++ b/pep-0432.txt
@@ -3,11 +3,11 @@
 Version: $Revision$
 Last-Modified: $Date$
 Author: Nick Coghlan <ncoghlan at gmail.com>
-Status: Draft
+Status: Deferred
 Type: Standards Track
 Content-Type: text/x-rst
 Created: 28-Dec-2012
-Python-Version: 3.4
+Python-Version: 3.5
 Post-History: 28-Dec-2012, 2-Jan-2013
 
 
@@ -25,6 +25,31 @@
 implementation is developed.
 
 
+PEP Deferral
+============
+
+Python 3.4 is nearing its first alpha, and already includes a couple of
+significant low level changes in PEP 445 (memory allocator customisation)
+and PEP 442 (safe object finalization). As a result of the latter PEP,
+the shutdown procedure of CPython has also been changed to be more heavily
+reliant on the cyclic garbage collector, significantly reducing the
+number of modules that will experience the "module globals set to None"
+behaviour that is used to deliberately break cycles and attempt to release
+external resources more cleanly.
+
+Furthermore, I am heavily involved in the current round of updates to the
+Python packaging ecosystem (as both the lead author of PEP 426 and
+BDFL-delegate for several other PEPs), leaving little time to spare to work on
+this proposal. The other developers I would trust to lead this effort are
+also working on other things.
+
+So, due to those practical resource constraints, the proximity of Python
+3.4 deadlines, and recognition that making too many significant changes to
+the low level CPython infrastructure in one release is likely to be unwise,
+further work on this PEP has been deferred to the Python 3.5 development
+cycle.
+
+
 Proposal
 ========
 
diff --git a/pep-0435.txt b/pep-0435.txt
--- a/pep-0435.txt
+++ b/pep-0435.txt
@@ -5,7 +5,7 @@
 Author: Barry Warsaw <barry at python.org>,
         Eli Bendersky <eliben at gmail.com>,
         Ethan Furman <ethan at stoneleaf.us>
-Status: Accepted
+Status: Final
 Type: Standards Track
 Content-Type: text/x-rst
 Created: 2013-02-23
@@ -467,6 +467,10 @@
     ...   cat = 3
     ...   dog = 4
 
+The reason for defaulting to ``1`` as the starting number and not ``0`` is
+that ``0`` is ``False`` in a boolean sense, but enum members all evaluate
+to ``True``.
+
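A minimal illustration of that rationale (the class name here is a hypothetical example): even a member whose value is ``0`` is truthy, because plain ``Enum`` members use the default object truthiness rather than their value's.

```python
from enum import Enum

class Switch(Enum):
    off = 0   # value 0 would be falsy as a plain int...
    on = 1

assert bool(Switch.off) is True         # ...but the member itself is truthy
assert bool(Switch.off.value) is False  # its value remains falsy
```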
 
 Proposed variations
 ===================
diff --git a/pep-0439.txt b/pep-0439.txt
--- a/pep-0439.txt
+++ b/pep-0439.txt
@@ -45,6 +45,12 @@
 considerably reduced.  It is hoped that this will therefore increase
 the likelihood that Python projects will reuse third party software.
 
+The Python community also has an issue of complexity around the current
+bootstrap procedure for pip and setuptools. Each has its own bootstrap
+download file with slightly different usage, and they even refer to each
+other in some cases. Having a single bootstrap, common to them all and
+simple to use, would be far preferable.
+
 It is also hoped that this reduces the number of proposals to
 include more and more software in the Python standard library, and
 therefore that more popular Python software is more easily upgradeable
@@ -54,23 +60,32 @@
 Proposal
 ========
 
-This proposal affects three components of packaging: `the pip bootstrap`_,
-`setuptools`_ and, thanks to easier package installation, `modifications to
-publishing packages`_.
+The bootstrap will install the pip implementation and setuptools by
+downloading their installation files from PyPI.
+
+This proposal affects two components of packaging: `the pip bootstrap`_ and,
+thanks to easier package installation, `modifications to publishing
+packages`_.
+
+The core of this proposal is that the user experience of using pip should not
+require the user to install pip.
 
 
 The pip bootstrap
 -----------------
 
 The Python installation includes an executable called "pip3" (see PEP 394 for
-naming rationale etc.) that attempts to import pip machinery.  If it can
-then the pip command proceeds as normal.  If it cannot it will bootstrap pip by
-downloading the pip implementation wheel file.  Once installed, the pip command
-proceeds as normal.
+naming rationale etc.) that attempts to import pip machinery.  If it can then
+the pip command proceeds as normal.  If it cannot it will bootstrap pip by
+downloading the pip implementation and setuptools wheel files. Hereafter the
+installation of the "pip implementation" will imply installation of setuptools
+and virtualenv.  Once installed, the pip command proceeds as normal. Once the
+bootstrap process is complete the "pip3" command is no longer the bootstrap
+but rather the full pip command.
 
-A boostrap is used in the place of a the full pip code so that we
-don't have to bundle pip and also the install tool is upgradeable
-outside of the regular Python upgrade timeframe and processes.
+A bootstrap is used in place of the full pip code so that we don't have to
+bundle pip, and so that pip is upgradeable outside of the regular Python
+upgrade timeframe and processes.
 
 To avoid issues with sudo we will have the bootstrap default to
 installing the pip implementation to the per-user site-packages
@@ -88,82 +103,58 @@
 2. The user will invoke a pip command, typically "pip3 install
    <package>", for example "pip3 install Django".
 3. The bootstrap script will attempt to import the pip implementation.
-   If this succeeds, the pip command is processed normally.
+   If this succeeds, the pip command is processed normally. Stop.
 4. On failing to import the pip implementation the bootstrap notifies
-   the user that it is "upgrading pip" and contacts PyPI to obtain the
-   latest download wheel file (see PEP 427.)
-5. Upon downloading the file it is installed using the distlib
-   installation machinery for wheel packages.  Upon completing the
-   installation the user is notified that "pip3 has been upgraded."
-   TODO how is it verified?
-6. The pip tool may now import the pip implementation and continues to
+   the user that it needs to "install pip". It will ask the user whether it
+   should install pip into the system-wide site-packages or as a user-only
+   package. This choice will also be present as a command-line option to pip
+   so non-interactive use is possible.
+5. The bootstrap will then contact PyPI to obtain the latest download wheel
+   file (see PEP 427.)
+6. Upon downloading the file it is installed using "python setup.py install".
+7. The pip tool may now import the pip implementation and continues to
    process the requested user command normally.
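The decision flow in the steps above can be sketched as follows (an illustrative toy with hypothetical names, not the actual bootstrap implementation):

```python
def bootstrap_action(pip_importable, install_target="user"):
    """Return what the "pip3" bootstrap would do for a given state."""
    if pip_importable:
        # Step 3: the import succeeded, so run the real pip command.
        return "run pip normally"
    # Steps 4-6: download the wheel from PyPI and install it, either
    # system-wide or (by default) into the per-user site-packages.
    if install_target == "system":
        return "install pip to the system site-packages, then run"
    return "install pip to the per-user site-packages, then run"

assert bootstrap_action(True) == "run pip normally"
assert bootstrap_action(False).startswith("install pip to the per-user")
```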
 
 Users may be running in an environment which cannot access the public
 Internet and are relying solely on a local package repository.  They
 would use the "-i" (Base URL of Python Package Index) argument to the
-"pip3 install" command.  This use case will be handled by:
+"pip3 install" command. This simply overrides the default index URL pointing
+to PyPI.
 
-1. Recognising the command-line arguments that specify alternative or
-   additional locations to discover packages and attempting to
-   download the package from those locations.
-2. If the package is not found there then we attempt to donwload it
-   using the standard "https://pypi.python.org/pypi/simple/pip" index.
-3. If that also fails, for any reason, we indicate to the user the
-   operation we were attempting, the reason for failure (if we know
-   it) and display further instructions for downloading and installing
-   the file manually.
+Some users may have no Internet access suitable for fetching the pip
+implementation file. These users can manually download and install the
+setuptools and pip tar files. Adding specific support for this use-case is
+unnecessary.
 
-Manual installation of the pip implementation will be supported
-through the manual download of the wheel file and "pip3 install
-<downloaded wheel file>".
-
-This installation will not perform standard pip installation steps of
-saving the file to a cache directory or updating any local database of
-installed files.
-
-The download of the pip implementation install file should be performed
-securely.  The transport from pypi.python.org will be done over HTTPS but the CA
-certificate check will most likely not be performed, and therefore the download
-would still be vulnerable to active MITM attacks.  To mitigate this
-risk we will use the embedded signature support in the wheel format to validate
-the downloaded file.
+The download of the pip implementation install file will be performed
+securely.  The transport from pypi.python.org will be done over HTTPS with the
+CA certificate check performed. This facility will be present in Python 3.4+
+using Operating System certificates (see PEP XXXX).
 
 Beyond those arguments controlling index location and download
 options, the "pip3" boostrap command may support further standard pip
 options for verbosity, quietness and logging.
 
+The "pip3" command will support two new command-line options that are used
+in the bootstrapping, and otherwise ignored. They control where the pip
+implementation is installed:
+
+--bootstrap
+  Install to the user's packages directory. The name of this option is chosen
+  to promote it as the preferred installation option.
+
+--bootstrap-to-system
+  Install to the system site-packages directory.
+
+These command-line options will also need to be implemented, but otherwise
+ignored, in the pip implementation.
+
+Consideration should be given to defaulting pip to install packages to the
+user's packages directory if pip is installed in that location.
+
 The "--no-install" option to the "pip3" command will not affect the
 bootstrapping process.
 
-setuptools
-----------
-
-The deprecation of requiring setuptools for installation is an existing goal of
-the packaging comminity (TODO ref needed). Currently pip depends upon setuptools
-functionality, and it is installed by the current pip boostrap. This PEP does
-not propose installing setuptools during the new bootstrap.
-
-It is intended that before Python 3.4 is shipped the functionlity required by
-pip will be present in Python's standard library as the distlib module, and that
-pip would be modified to use that functionality when present. TODO PEP reference
-for distlib
-
-Many existing "setup.py" files require setuptools to be installed (because one
-of the first things they do is import setuptools). It is intended that pip's
-behaviour will be either:
-
-1. If setuptools is not present it can only install from wheel files and
-   sdists with 2.0+ metadata, or
-2. If setuptools is present it can also install from sdists with legacy
-   metadata and eggs
-
-By default, installing setuptools when necessary should be automatic so that
-users are not inconvenienced, but advanced users should be able to ask that it
-instead be treated as an error if no wheel is available to satisfy an
-installation request or dependency (so they don't inadvertently install
-setuptools on their production systems if they don't want to).
-
 
 Modifications to publishing packages
 ------------------------------------
@@ -189,22 +180,36 @@
 ==============
 
 The changes to pip required by this PEP are being tracked in that project's
-issue tracker [2]_
+issue tracker [2]_. Most notably, this includes the addition of
+``--bootstrap`` and ``--bootstrap-to-system`` to the pip command-line.
+
+It would be preferable that the pip and setuptools projects distribute a wheel
+format download.
+
+The required code for this implementation is the "pip3" command described
+above. The additional pypublish can be developed outside of the scope of this
+PEP's work.
+
+Finally, it would be desirable that "pip3" be ported to Python 2.6+ to allow
+the single command to replace existing pip, setuptools and virtualenv (which
+would be added to the bootstrap) bootstrap scripts. Having that bootstrap
+included in a future Python 2.7 release would also be highly desirable.
 
 
 Risks
 =====
 
-The Fedora variant of Linux has had a separate program called "pip" (a
-Perl package installer) available for install for some time.  The
-current Python "pip" program is installed as "pip-python".  It is
-hoped that the Fedora community will resolve this issue by renaming
-the Perl installer.
-
 The key that is used to sign the pip implementation download might be
 compromised and this PEP currently proposes no mechanism for key
 revocation.
 
+There is a Perl package installer also named "pip", though it is rare and
+not commonly used. The Fedora variant of Linux has historically named Python's
+"pip" as "python-pip" and Perl's "pip" as "perl-pip".  This policy has been
+altered [3]_ so that future and upgraded Fedora installations will use the name
+"pip" for Python's "pip". Existing (non-upgraded) installations will still
+have the old name for the Python "pip", though the potential for confusion is
+now much reduced.
 
 
 References
@@ -216,6 +221,9 @@
 .. [2] pip issue tracking work needed for this PEP
        https://github.com/pypa/pip/issues/863
 
+.. [3] Fedora's python-pip package does not provide /usr/bin/pip
+       https://bugzilla.redhat.com/show_bug.cgi?id=958377
+
 
 Acknowledgments
 ===============
@@ -223,7 +231,9 @@
 Nick Coghlan for his thoughts on the proposal and dealing with the Red
 Hat issue.
 
-Jannis Leidel and Carl Meyer for their thoughts.
+Jannis Leidel and Carl Meyer for their thoughts. Marcus Smith for feedback.
+
+Marcela Mašláňová for resolving the Fedora issue.
 
 
 Copyright
diff --git a/pep-0440.txt b/pep-0440.txt
--- a/pep-0440.txt
+++ b/pep-0440.txt
@@ -9,7 +9,7 @@
 Type: Standards Track
 Content-Type: text/x-rst
 Created: 18 Mar 2013
-Post-History: 30 Mar 2013, 27-May-2013
+Post-History: 30 Mar 2013, 27 May 2013, 20 Jun 2013
 Replaces: 386
 
 
@@ -27,7 +27,7 @@
    This PEP was broken out of the metadata 2.0 specification in PEP 426.
 
    Unlike PEP 426, the notes that remain in this document are intended as
-   part of the final specification.
+   part of the final specification (except for this one).
 
 
 Definitions
@@ -40,7 +40,7 @@
 The following terms are to be interpreted as described in PEP 426:
 
 * "Distributions"
-* "Versions"
+* "Releases"
 * "Build tools"
 * "Index servers"
 * "Publication tools"
@@ -52,9 +52,13 @@
 Version scheme
 ==============
 
-Distribution versions are identified by both a public version identifier,
-which supports all defined version comparison operations, and a build
-label, which supports only strict equality comparisons.
+Distributions are identified by a public version identifier which
+supports all defined version comparison operations.
+
+Distributions may also define a source label, which is not used by
+automated tools. Source labels are useful when a project's internal
+versioning scheme requires translation to create a compliant public
+version identifier.
 
 The version scheme is used both to describe the distribution version
 provided by a particular distribution archive, as well as to place
@@ -84,7 +88,7 @@
 * Post-release segment: ``.postN``
 * Development release segment: ``.devN``
 
-Any given version will be a "release", "pre-release", "post-release" or
+Any given release will be a "final release", "pre-release", "post-release" or
 "developmental release" as defined in the following sections.
 
 .. note::
@@ -99,34 +103,43 @@
    sections.
 
 
-Build labels
-------------
+Source labels
+-------------
 
-Build labels are text strings with minimal defined semantics.
+Source labels are text strings with minimal defined semantics.
 
-To ensure build labels can be readily incorporated as part of file names
-and URLs, they MUST be comprised of only ASCII alphanumerics, plus signs,
-periods and hyphens.
+To ensure source labels can be readily incorporated as part of file names
+and URLs, and to avoid formatting inconsistencies in hexadecimal hash
+representations they MUST be limited to the following set of permitted
+characters:
 
-In addition, build labels MUST be unique within a given distribution.
+* Lowercase ASCII letters (``[a-z]``)
+* ASCII digits (``[0-9]``)
+* underscores (``_``)
+* hyphens (``-``)
+* periods (``.``)
+* plus signs (``+``)
 
-As with distribution names, all comparisons of build labels MUST be case
-insensitive.
+Source labels MUST start and end with an ASCII letter or digit.
 
+Source labels MUST be unique within each project and MUST NOT match any
+defined version for the project.
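These rules can be expressed as a regular expression (a sketch; it assumes a single letter or digit is itself a valid label, since it both starts and ends the string):

```python
import re

# Lowercase letters, digits, "_", "-", "." and "+" in the middle;
# the first and last characters must be a letter or digit.
SOURCE_LABEL = re.compile(r"^[0-9a-z]([0-9a-z_.+-]*[0-9a-z])?$")

assert SOURCE_LABEL.match("1.7.2-pre+g1b2c3d")   # DVCS-style label
assert not SOURCE_LABEL.match("Beta1")           # uppercase not permitted
assert not SOURCE_LABEL.match(".hidden")         # must start with letter/digit
```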
 
-Releases
---------
 
-A version identifier that consists solely of a release segment is termed
-a "release".
+Final releases
+--------------
 
-The release segment consists of one or more non-negative integer values,
-separated by dots::
+A version identifier that consists solely of a release segment is
+termed a "final release".
+
+The release segment consists of one or more non-negative integer
+values, separated by dots::
 
     N[.N]+
 
-Releases within a project will typically be numbered in a consistently
-increasing fashion.
+Final releases within a project MUST be numbered in a consistently
+increasing fashion, otherwise automated tools will not be able to upgrade
+them correctly.
 
 Comparison and ordering of release segments considers the numeric value
 of each component of the release segment in turn. When comparing release
@@ -157,8 +170,8 @@
     2.0
     2.0.1
 
-A release series is any set of release numbers that start with a common
-prefix. For example, ``3.3.1``, ``3.3.5`` and ``3.3.9.45`` are all
+A release series is any set of final release numbers that start with a
+common prefix. For example, ``3.3.1``, ``3.3.5`` and ``3.3.9.45`` are all
 part of the ``3.3`` release series.
 
 .. note::
@@ -206,8 +219,8 @@
 Post-releases
 -------------
 
-Some projects use post-releases to address minor errors in a release that
-do not affect the distributed software (for example, correcting an error
+Some projects use post-releases to address minor errors in a final release
+that do not affect the distributed software (for example, correcting an error
 in the release notes).
 
 If used as part of a project's development cycle, these post-releases are
@@ -371,7 +384,7 @@
    .devN, aN, bN, cN, rcN, <no suffix>, .postN
 
 Note that `rc` will always sort after `c` (regardless of the numeric
-component) although they are semantically equivalent. Tools are free to
+component) although they are semantically equivalent. Tools MAY
 reject this case as ambiguous and remain in compliance with the PEP.
 
 Within an alpha (``1.0a1``), beta (``1.0b1``), or release candidate
@@ -444,7 +457,7 @@
 
 Some projects may choose to use a version scheme which requires
 translation in order to comply with the public version scheme defined in
-this PEP. In such cases, the build label can be used to
+this PEP. In such cases, the source label can be used to
 record the project specific version as an arbitrary label, while the
 translated public version is published in the version field.
 
@@ -488,7 +501,7 @@
 permitted in the public version field.
 
 As with semantic versioning, the public ``.devN`` suffix may be used to
-uniquely identify such releases for publication, while the build label is
+uniquely identify such releases for publication, while the source label is
 used to record the original DVCS based version label.
 
 
@@ -496,7 +509,7 @@
 ~~~~~~~~~~~~~~~~~~~
 
 As with other incompatible version schemes, date based versions can be
-stored in the build label field. Translating them to a compliant
+stored in the source label field. Translating them to a compliant
 public version is straightforward: use a leading ``"0."`` prefix in the
 public version label, with the date based version number as the remaining
 components in the release segment.
@@ -506,6 +519,22 @@
 version comparison semantics.
 
 
+Olson database versioning
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``pytz`` project inherits its versioning scheme from the corresponding
+Olson timezone database versioning scheme: the year followed by a lowercase
+character indicating the version of the database within that year.
+
+This can be translated to a compliant 3-part version identifier as
+``0.<year>.<serial>``, where the serial starts at zero (for the '<year>a'
+release) and is incremented with each subsequent database update within the
+year.
+
+As with other translated version identifiers, the corresponding Olson
+database version would be recorded in the source label field.
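The translation described above can be sketched as follows (a hypothetical helper for illustration):

```python
def olson_to_public(label):
    """Translate an Olson label like "2013g" to "0.<year>.<serial>"."""
    year, revision = label[:4], label[4:]
    serial = ord(revision) - ord("a")  # "a" -> 0, "b" -> 1, ...
    return "0.{}.{}".format(year, serial)

assert olson_to_public("2013a") == "0.2013.0"
assert olson_to_public("2013g") == "0.2013.6"
```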
+
+
 Version specifiers
 ==================
 
@@ -521,7 +550,6 @@
 * ``~=``: `Compatible release`_ clause
 * ``==``: `Version matching`_ clause
 * ``!=``: `Version exclusion`_ clause
-* ``is``: `Build reference`_ clause
 * ``<=``, ``>=``: `Inclusive ordered comparison`_ clause
 * ``<``, ``>``: `Exclusive ordered comparison`_ clause
 
@@ -605,6 +633,11 @@
 release segment to ensure the release segments are compared with the same
 length.
 
+Whether or not strict version matching is appropriate depends on the specific
+use case for the version specifier. Automated tools SHOULD at least issue
+warnings, and MAY reject such specifiers entirely, when strict version
+matches are used inappropriately.
+
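The zero-padding behaviour described above can be sketched for final releases (a simplified illustration that ignores pre-, post- and developmental release suffixes):

```python
from itertools import zip_longest

def versions_match(candidate, clause):
    # Pad the shorter release segment with zeros so that, for example,
    # candidate "1.0" matches the clause "== 1.0.0".
    c = [int(part) for part in candidate.split(".")]
    s = [int(part) for part in clause.split(".")]
    return all(a == b for a, b in zip_longest(c, s, fillvalue=0))

assert versions_match("1.0", "1.0.0")
assert not versions_match("1.0.1", "1.0.0")
```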
 Prefix matching may be requested instead of strict comparison, by appending
 a trailing ``.*`` to the version identifier in the version matching clause.
 This means that additional trailing segments will be ignored when
@@ -626,10 +659,6 @@
 dependencies for repeatable *deployments of applications* while using
 a shared distribution index.
 
-Publication tools and index servers SHOULD at least emit a warning when
-dependencies are pinned in this fashion and MAY refuse to allow publication
-of such overly specific dependencies.
-
 
 Version exclusion
 -----------------
@@ -649,74 +678,6 @@
     != 1.1.*      # Same prefix, so 1.1.post1 does not match clause
 
 
-Build reference
----------------
-
-A build reference includes the build reference operator ``is`` and
-a build label or a build URL.
-
-Publication tools and public index servers SHOULD NOT permit build
-references in dependency specifications.
-
-Installation tools SHOULD support the use of build references to identify
-dependencies.
-
-Build label matching works solely on strict equality comparisons: the
-candidate build label must be exactly the same as the build label in the
-version clause for the clause to match the candidate distribution.
-
-For example, a build reference could be used to depend on a ``hashdist``
-generated build of ``zlib`` with the ``hashdist`` hash used as a build
-label::
-
-    zlib (is d4jwf2sb2g6glprsdqfdpcracwpzujwq)
-
-A build URL is distinguished from a build label by the presence of
-``:`` and ``/`` characters in the build reference. As these characters
-are not permitted in build labels, they indicate that the reference uses
-a build URL.
-
-Some appropriate targets for a build URL are a binary archive, a
-source tarball, an sdist archive or a direct reference to a tag or
-specific commit in an online version control system. The exact URLs and
-targets supported will be installation tool specific.
-
-For example, a local prebuilt wheel file may be referenced directly::
-
-    exampledist (is file:///localbuilds/exampledist-1.0-py33-none-any.whl)
-
-All build URL references SHOULD either specify a local file URL, a secure
-transport mechanism (such as ``https``) or else include an expected hash
-value in the URL for verification purposes. If an insecure network
-transport is specified without any hash information (or with hash
-information that the tool doesn't understand), automated tools SHOULD
-at least emit a warning and MAY refuse to rely on the URL.
-
-It is RECOMMENDED that only hashes which are unconditionally provided by
-the latest version of the standard library's ``hashlib`` module be used
-for source archive hashes. At time of writing, that list consists of
-``'md5'``, ``'sha1'``, ``'sha224'``, ``'sha256'``, ``'sha384'``, and
-``'sha512'``.
-
-For binary or source archive references, an expected hash value may be
-specified by including a ``<hash-algorithm>=<expected-hash>`` as part of
-the URL fragment.
-
-For version control references, the ``VCS+protocol`` scheme SHOULD be
-used to identify both the version control system and the secure transport.
-
-To support version control systems that do not support including commit or
-tag references directly in the URL, that information may be appended to the
-end of the URL using the ``@<tag>`` notation.
-
-The use of ``is`` when defining dependencies for published distributions
-is strongly discouraged as it greatly complicates the deployment of
-security fixes. The build label matching operator is intended primarily
-for use when defining dependencies for repeatable *deployments of
-applications* while using a shared distribution index, as well as to
-reference dependencies which are not published through an index server.
-
-
 Inclusive ordered comparison
 ----------------------------
 
@@ -755,62 +716,108 @@
 ------------------------
 
 Pre-releases of any kind, including developmental releases, are implicitly
-excluded from all version specifiers, *unless* a pre-release or developmental
-release is explicitly mentioned in one of the clauses. For example, these
-specifiers implicitly exclude all pre-releases and development
-releases of later versions::
-
-    2.2
-    >= 1.0
-
-While these specifiers would include at least some of them::
-
-    2.2.dev0
-    2.2, != 2.3b2
-    >= 1.0a1
-    >= 1.0c1
-    >= 1.0, != 1.0b2
-    >= 1.0, < 2.0.dev123
+excluded from all version specifiers, *unless* they are already present
+on the system, explicitly requested by the user, or if the only available
+version that satisfies the version specifier is a pre-release.
 
 By default, dependency resolution tools SHOULD:
 
 * accept already installed pre-releases for all version specifiers
-* accept remotely available pre-releases for version specifiers which
-  include at least one version clauses that references a pre-release
+* accept remotely available pre-releases for version specifiers where
+  there is no final or post release that satisfies the version specifier
 * exclude all other pre-releases from consideration
 
+Dependency resolution tools MAY issue a warning if a pre-release is needed
+to satisfy a version specifier.
+
 Dependency resolution tools SHOULD also allow users to request the
 following alternative behaviours:
 
 * accepting pre-releases for all version specifiers
 * excluding pre-releases for all version specifiers (reporting an error or
-  warning if a pre-release is already installed locally)
+  warning if a pre-release is already installed locally, or if a
+  pre-release is the only way to satisfy a particular specifier)
 
 Dependency resolution tools MAY also allow the above behaviour to be
 controlled on a per-distribution basis.
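The default and alternative behaviours above can be sketched as a small selection routine (an editor's illustration only; ``select_version`` and its arguments are hypothetical names, not part of the PEP):

```python
def select_version(candidates, installed=None, mode="default"):
    """Pick a release under the PEP's pre-release handling rules.

    candidates: (version_string, is_prerelease) pairs that already
    satisfy the version specifier, ordered newest first.
    """
    if installed is not None:
        # an already installed pre-release is always acceptable
        return installed
    if not candidates:
        return None
    if mode == "always":                 # user opted in to pre-releases
        return candidates[0][0]
    finals = [v for v, pre in candidates if not pre]
    if finals:
        return finals[0]
    if mode == "never":                  # user opted out entirely
        raise ValueError("only pre-releases satisfy the specifier")
    # default: fall back to a pre-release only when nothing else fits
    # (tools MAY issue a warning at this point)
    return candidates[0][0]
```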
 
-Post-releases and purely numeric releases receive no special treatment in
-version specifiers - they are always included unless explicitly excluded.
+Post-releases and final releases receive no special treatment in version
+specifiers - they are always included unless explicitly excluded.
 
 
 Examples
 --------
 
-* ``3.1``: version 3.1 or later, but not
-   version 4.0 or later. Excludes pre-releases and developmental releases.
-* ``3.1.2``: version 3.1.2 or later, but not
-   version 3.2.0 or later. Excludes pre-releases and developmental releases.
-* ``3.1a1``: version 3.1a1 or later, but not
-   version 4.0 or later. Allows pre-releases like 3.2a4 and developmental
-   releases like 3.2.dev1.
+* ``3.1``: version 3.1 or later, but not version 4.0 or later.
+* ``3.1.2``: version 3.1.2 or later, but not version 3.2.0 or later.
+* ``3.1a1``: version 3.1a1 or later, but not version 4.0 or later.
 * ``== 3.1``: specifically version 3.1 (or 3.1.0), excludes all pre-releases,
   post releases, developmental releases and any 3.1.x maintenance releases.
-* ``== 3.1.*``: any version that starts with 3.1, excluding pre-releases and
-  developmental releases. Equivalent to the ``3.1.0`` compatible release
-  clause.
+* ``== 3.1.*``: any version that starts with 3.1. Equivalent to the
+  ``3.1.0`` compatible release clause.
 * ``3.1.0, != 3.1.3``: version 3.1.0 or later, but not version 3.1.3 and
-  not version 3.2.0 or later. Excludes pre-releases and developmental
-  releases.
+  not version 3.2.0 or later.
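For the unqualified clauses above, the "version X or later, but not the next major/minor release" reading can be expressed as a pair of ordered comparisons. A minimal sketch (editor's addition; ``compatible_release_bounds`` is a hypothetical helper handling only plain numeric segments):

```python
def compatible_release_bounds(clause: str):
    """Translate a compatible release clause into ordered comparisons.

    "3.1.2" is treated as ">= 3.1.2, < 3.2"; "3.1" as ">= 3.1, < 4".
    Only plain numeric release segments are handled in this sketch.
    """
    parts = clause.split(".")
    if len(parts) < 2:
        raise ValueError("need at least two release segments")
    head = parts[:-1]
    head[-1] = str(int(head[-1]) + 1)    # bump the second-to-last segment
    return (">= " + clause, "< " + ".".join(head))
```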
+
+
+Direct references
+=================
+
+Some automated tools may permit the use of a direct reference as an
+alternative to a normal version specifier. A direct reference consists of
+the word ``from`` and an explicit URL.
+
+Whether or not direct references are appropriate depends on the specific
+use case for the version specifier. Automated tools SHOULD at least issue
+warnings and MAY reject them entirely when direct references are used
+inappropriately.
+
+Public index servers SHOULD NOT allow the use of direct references in
+uploaded distributions. Direct references are intended as a tool for
+software integrators rather than publishers.
+
+Depending on the use case, some appropriate targets for a direct URL
+reference may be a valid ``source_url`` entry (see PEP 426), an sdist, or
+a wheel binary archive. The exact URLs and targets supported will be tool
+dependent.
+
+For example, a local source archive may be referenced directly::
+
+    pip (from file:///localbuilds/pip-1.3.1.zip)
+
+Alternatively, a prebuilt archive may also be referenced::
+
+    pip (from file:///localbuilds/pip-1.3.1-py33-none-any.whl)
+
+All direct references that do not refer to a local file URL SHOULD
+specify a secure transport mechanism (such as ``https``), include an
+expected hash value in the URL for verification purposes, or both. If an
+insecure transport is specified without any hash information, with hash
+information that the tool doesn't understand, or with a selected hash
+algorithm that the tool considers too weak to trust, automated tools
+SHOULD at least emit a warning and MAY refuse to rely on the URL.
+
+It is RECOMMENDED that only hashes which are unconditionally provided by
+the latest version of the standard library's ``hashlib`` module be used
+for source archive hashes. At time of writing, that list consists of
+``'md5'``, ``'sha1'``, ``'sha224'``, ``'sha256'``, ``'sha384'``, and
+``'sha512'``.
+
+For source archive and wheel references, an expected hash value may be
+specified by including a ``<hash-algorithm>=<expected-hash>`` entry as
+part of the URL fragment.
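Such a fragment entry can be produced directly with the standard library's ``hashlib`` module mentioned above (an editor's sketch; the helper name is hypothetical):

```python
import hashlib

def hash_fragment(data: bytes, algorithm: str = "sha256") -> str:
    """Build a '#<hash-algorithm>=<expected-hash>' URL fragment."""
    if algorithm not in hashlib.algorithms_guaranteed:
        raise ValueError("algorithm not guaranteed by hashlib")
    digest = hashlib.new(algorithm, data).hexdigest()
    return "#%s=%s" % (algorithm, digest)

# e.g. appended to an archive URL:
# "https://example.com/exampledist-1.0.zip" + hash_fragment(archive_bytes)
```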
+
+For version control references, the ``VCS+protocol`` scheme SHOULD be
+used to identify both the version control system and the secure transport.
+
+To support version control systems that do not support including commit or
+tag references directly in the URL, that information may be appended to the
+end of the URL using the ``@<tag>`` notation.
+
+Remote URL examples::
+
+    pip (from https://github.com/pypa/pip/archive/1.3.1.zip)
+    pip (from http://github.com/pypa/pip/archive/1.3.1.zip#sha1=da9234ee9982d4bbb3c72346a6de940a148ea686)
+    pip (from git+https://github.com/pypa/pip.git@1.3.1)
 
 
 Updating the versioning specification
@@ -823,56 +830,45 @@
 versioning scheme and metadata version defined in new PEPs.
 
 
-Open issues
-===========
-
-* The new ``is`` operator seems like a reasonable way to cleanly allow
-  installation tools to bring in non-published dependencies, while heavily
-  discouraging the practice for published libraries. It also makes
-  build labels more useful by allowing them to be used to pin dependencies
-  in the integration use case.
-
-  However, it's an early draft of the idea, so feedback is definitely
-  welcome.
-
-
 Summary of differences from \PEP 386
 ====================================
 
 * Moved the description of version specifiers into the versioning PEP
 
-* added the "build label" concept to better handle projects that wish to
+* Added the "source label" concept to better handle projects that wish to
   use a non-compliant versioning scheme internally, especially those based
   on DVCS hashes
-  
-* added the "compatible release" clause
 
-* added the "build reference" clause
+* Added the "direct reference" concept as a standard notation for direct
+  references to resources (rather than each tool needing to invent its own)
 
-* added the trailing wildcard syntax for prefix based version matching
+* Added the "compatible release" clause
+
+* Added the trailing wildcard syntax for prefix based version matching
   and exclusion
 
-* changed the top level sort position of the ``.devN`` suffix
+* Changed the top level sort position of the ``.devN`` suffix
 
-* allowed single value version numbers
+* Allowed single value version numbers
 
-* explicit exclusion of leading or trailing whitespace
+* Explicit exclusion of leading or trailing whitespace
 
-* explicit criterion for the exclusion of date based versions
+* Explicit criterion for the exclusion of date based versions
 
-* implicitly exclude pre-releases unless explicitly requested
+* Implicitly exclude pre-releases unless they're already present or
+  needed to satisfy a dependency
 
-* treat post releases the same way as unqualified releases
+* Treat post releases the same way as unqualified releases
 
 * Discuss ordering and dependencies across metadata versions
 
 The rationale for major changes is given in the following sections.
 
 
-Adding build labels
--------------------
+Adding source labels
+--------------------
 
-The new build label support is intended to make it clearer that the
+The new source label support is intended to make it clearer that the
 constraints on public version identifiers are there primarily to aid in
 the creation of reliable automated dependency analysis tools. Projects
 are free to use whatever versioning scheme they like internally, so long
@@ -1011,11 +1007,12 @@
 specifiers for no adequately justified reason.
 
 The updated interpretation is intended to make it difficult to accidentally
-accept a pre-release version as satisfying a dependency, while allowing
-pre-release versions to be explicitly requested when needed.
+accept a pre-release version as satisfying a dependency, while still
+allowing pre-release versions to be retrieved automatically when that's the
+only way to satisfy a dependency.
 
 The "some forward compatibility assumed" default version constraint is
-taken directly from the Ruby community's "pessimistic version constraint"
+derived from the Ruby community's "pessimistic version constraint"
 operator [2]_ to allow projects to take a cautious approach to forward
 compatibility promises, while still easily setting a minimum required
 version for their dependencies. It is made the default behaviour rather
@@ -1038,16 +1035,26 @@
 
 The trailing wildcard syntax to request prefix based version matching was
 added to make it possible to sensibly define both compatible release clauses
-and the desired pre-release handling semantics for ``<`` and ``>`` ordered
-comparison clauses.
+and the desired pre- and post-release handling semantics for ``<`` and ``>``
+ordered comparison clauses.
 
-Build references are added for two purposes. In conjunction with build
-labels, they allow hash based references, such as those employed by
-`hashdist <http://hashdist.readthedocs.org/en/latest/build_spec.html>`__,
-or generated from version control. In conjunction with build URLs, they
-allow the new metadata standard to natively support an existing feature of
-``pip``, which allows arbitrary URLs like
-``file:///localbuilds/exampledist-1.0-py33-none-any.whl``.
+
+Adding direct references
+------------------------
+
+Direct references are added as an "escape clause" to handle messy real
+world situations that don't map neatly to the standard distribution model.
+This includes dependencies on unpublished software for internal use, as well
+as handling the more complex compatibility issues that may arise when
+wrapping third party libraries as C extensions (this is of especial concern
+to the scientific community).
+
+Index servers are deliberately given a lot of freedom to disallow direct
+references, since they're intended primarily as a tool for integrators
+rather than publishers. PyPI in particular is currently going through the
+process of *eliminating* dependencies on external references, as unreliable
+external services have the effect of slowing down installation operations,
+as well as reducing PyPI's own apparent reliability.
 
 
 References
diff --git a/pep-0442.txt b/pep-0442.txt
--- a/pep-0442.txt
+++ b/pep-0442.txt
@@ -4,13 +4,13 @@
 Last-Modified: $Date$
 Author: Antoine Pitrou <solipsis at pitrou.net>
 BDFL-Delegate: Benjamin Peterson <benjamin at python.org>
-Status: Draft
+Status: Final
 Type: Standards Track
 Content-Type: text/x-rst
 Created: 2013-05-18
 Python-Version: 3.4
 Post-History: 2013-05-18
-Resolution: TBD
+Resolution: http://mail.python.org/pipermail/python-dev/2013-June/126746.html
 
 
 Abstract
@@ -201,8 +201,7 @@
 --------------
 
 Following this scheme, an object's finalizer is always called exactly
-once.  The only exception is if an object is resurrected: the finalizer
-will be called again when the object becomes unreachable again.
+once, even if it was resurrected afterwards.
 
 For CI objects, the order in which finalizers are called (step 2 above)
 is undefined.
diff --git a/pep-0443.txt b/pep-0443.txt
--- a/pep-0443.txt
+++ b/pep-0443.txt
@@ -4,11 +4,11 @@
 Last-Modified: $Date$
 Author: Łukasz Langa <lukasz at langa.pl>
 Discussions-To: Python-Dev <python-dev at python.org>
-Status: Draft
+Status: Final
 Type: Standards Track
 Content-Type: text/x-rst
 Created: 22-May-2013
-Post-History: 22-May-2013, 25-May-2013
+Post-History: 22-May-2013, 25-May-2013, 31-May-2013
 Replaces: 245, 246, 3124
 
 
@@ -44,11 +44,14 @@
 
 In addition, it is currently a common anti-pattern for Python code to
 inspect the types of received arguments, in order to decide what to do
-with the objects. For example, code may wish to accept either an object
+with the objects.
+
+For example, code may wish to accept either an object
 of some type, or a sequence of objects of that type.
+Currently, the "obvious way" to do this is by type inspection, but this
+is brittle and closed to extension.
 
-Currently, the "obvious way" to do this is by type inspection, but this
-is brittle and closed to extension. Abstract Base Classes make it easier
+Abstract Base Classes make it easier
 to discover present behaviour, but don't help adding new behaviour.
 A developer using an already-written library may be unable to change how
 their objects are treated by such code, especially if the objects they
@@ -63,7 +66,7 @@
 
 To define a generic function, decorate it with the ``@singledispatch``
 decorator. Note that the dispatch happens on the type of the first
-argument, create your function accordingly::
+argument. Create your function accordingly::
 
   >>> from functools import singledispatch
   >>> @singledispatch
@@ -73,7 +76,7 @@
   ...     print(arg)
 
 To add overloaded implementations to the function, use the
-``register()`` attribute of the generic function. It is a decorator,
+``register()`` attribute of the generic function. This is a decorator,
 taking a type parameter and decorating a function implementing the
 operation for that type::
 
@@ -98,7 +101,7 @@
   ...
   >>> fun.register(type(None), nothing)
 
-The ``register()`` attribute returns the undecorated function which
+The ``register()`` attribute returns the undecorated function. This
 enables decorator stacking, pickling, as well as creating unit tests for
 each variant independently::
 
@@ -134,13 +137,17 @@
 
 Where there is no registered implementation for a specific type, its
 method resolution order is used to find a more generic implementation.
+The original function decorated with ``@singledispatch`` is registered
+for the base ``object`` type, which means it is used if no better
+implementation is found.
+
 To check which implementation the generic function will choose for
 a given type, use the ``dispatch()`` attribute::
 
   >>> fun.dispatch(float)
   <function fun_num at 0x104319058>
-  >>> fun.dispatch(dict)
-  <function fun at 0x103fe4788>
+  >>> fun.dispatch(dict)    # note: default implementation
+  <function fun at 0x103fe0000>
 
 To access all registered implementations, use the read-only ``registry``
 attribute::
@@ -152,7 +159,7 @@
   >>> fun.registry[float]
   <function fun_num at 0x1035a2840>
   >>> fun.registry[object]
-  <function fun at 0x103170788>
+  <function fun at 0x103fe0000>
 
 The proposed API is intentionally limited and opinionated, as to ensure
 it is easy to explain and use, as well as to maintain consistency with
@@ -168,12 +175,12 @@
 reference implementation is available on hg.python.org [#ref-impl]_.
 
 The dispatch type is specified as a decorator argument. An alternative
-form using function annotations has been considered but its inclusion
-has been deferred. As of May 2013, this usage pattern is out of scope
-for the standard library [#pep-0008]_ and the best practices for
+form using function annotations was considered but its inclusion
+has been rejected. As of May 2013, this usage pattern is out of scope
+for the standard library [#pep-0008]_, and the best practices for
 annotation usage are still debated.
 
-Based on the current ``pkgutil.simplegeneric`` implementation and
+Based on the current ``pkgutil.simplegeneric`` implementation, and
 following the convention on registering virtual subclasses on Abstract
 Base Classes, the dispatch registry will not be thread-safe.
 
@@ -186,48 +193,37 @@
 importantly, it introduces support for Abstract Base Classes (ABC).
 
 When a generic function implementation is registered for an ABC, the
-dispatch algorithm switches to a mode of MRO calculation for the
-provided argument which includes the relevant ABCs. The algorithm is as
-follows::
+dispatch algorithm switches to an extended form of C3 linearization,
+which includes the relevant ABCs in the MRO of the provided argument.
+The algorithm inserts ABCs where their functionality is introduced, i.e.
+``issubclass(cls, abc)`` returns ``True`` for the class itself but
+returns ``False`` for all its direct base classes. Implicit ABCs for
+a given class (either registered or inferred from the presence of
+a special method like ``__len__()``) are inserted directly after the
+last ABC explicitly listed in the MRO of said class.
 
-  def _compose_mro(cls, haystack):
-      """Calculates the MRO for a given class `cls`, including relevant
-      abstract base classes from `haystack`."""
-      bases = set(cls.__mro__)
-      mro = list(cls.__mro__)
-      for regcls in haystack:
-          if regcls in bases or not issubclass(cls, regcls):
-              continue   # either present in the __mro__ or unrelated
-          for index, base in enumerate(mro):
-              if not issubclass(base, regcls):
-                  break
-          if base in bases and not issubclass(regcls, base):
-              # Conflict resolution: put classes present in __mro__
-              # and their subclasses first.
-              index += 1
-          mro.insert(index, regcls)
-      return mro
-
-In its most basic form, it returns the MRO for the given type::
+In its most basic form, this linearization returns the MRO for the given
+type::
 
   >>> _compose_mro(dict, [])
   [<class 'dict'>, <class 'object'>]
 
-When the haystack consists of ABCs that the specified type is a subclass
-of, they are inserted in a predictable order::
+When the second argument contains ABCs that the specified type is
+a subclass of, they are inserted in a predictable order::
 
   >>> _compose_mro(dict, [Sized, MutableMapping, str,
   ...                     Sequence, Iterable])
   [<class 'dict'>, <class 'collections.abc.MutableMapping'>,
-   <class 'collections.abc.Iterable'>, <class 'collections.abc.Sized'>,
+   <class 'collections.abc.Mapping'>, <class 'collections.abc.Sized'>,
+   <class 'collections.abc.Iterable'>, <class 'collections.abc.Container'>,
    <class 'object'>]
 
 While this mode of operation is significantly slower, all dispatch
 decisions are cached. The cache is invalidated on registering new
 implementations on the generic function or when user code calls
-``register()`` on an ABC to register a new virtual subclass. In the
-latter case, it is possible to create a situation with ambiguous
-dispatch, for instance::
+``register()`` on an ABC to implicitly subclass it. In the latter case,
+it is possible to create a situation with ambiguous dispatch, for
+instance::
 
   >>> from collections import Iterable, Container
   >>> class P:
@@ -254,27 +250,45 @@
   RuntimeError: Ambiguous dispatch: <class 'collections.abc.Container'>
   or <class 'collections.abc.Iterable'>
 
-Note that this exception would not be raised if ``Iterable`` and
-``Container`` had been provided as base classes during class definition.
-In this case dispatch happens in the MRO order::
+Note that this exception would not be raised if one or more ABCs had
+been provided explicitly as base classes during class definition. In
+this case dispatch happens in the MRO order::
 
   >>> class Ten(Iterable, Container):
   ...     def __iter__(self):
   ...         for i in range(10):
   ...             yield i
   ...     def __contains__(self, value):
-  ...       return value in range(10)
+  ...         return value in range(10)
   ...
   >>> g(Ten())
   'iterable'
 
+A similar conflict arises when subclassing an ABC is inferred from the
+presence of a special method like ``__len__()`` or ``__contains__()``::
+
+  >>> class Q:
+  ...   def __contains__(self, value):
+  ...     return False
+  ...
+  >>> issubclass(Q, Container)
+  True
+  >>> Iterable.register(Q)
+  >>> g(Q())
+  Traceback (most recent call last):
+  ...
+  RuntimeError: Ambiguous dispatch: <class 'collections.abc.Container'>
+  or <class 'collections.abc.Iterable'>
+
+An early version of the PEP contained a custom approach that was simpler
+but created a number of edge cases with surprising results [#why-c3]_.
 
 Usage Patterns
 ==============
 
 This PEP proposes extending behaviour only of functions specifically
 marked as generic. Just as a base class method may be overridden by
-a subclass, so too may a function be overloaded to provide custom
+a subclass, so too a function may be overloaded to provide custom
 functionality for a given type.
 
 Universal overloading does not equal *arbitrary* overloading, in the
@@ -371,6 +385,8 @@
    a particular annotation style".
    (http://www.python.org/dev/peps/pep-0008)
 
+.. [#why-c3] http://bugs.python.org/issue18244
+
 .. [#pep-3124] http://www.python.org/dev/peps/pep-3124/
 
 .. [#peak-rules] http://peak.telecommunity.com/DevCenter/PEAK_2dRules
diff --git a/pep-0445.txt b/pep-0445.txt
new file mode 100644
--- /dev/null
+++ b/pep-0445.txt
@@ -0,0 +1,773 @@
+PEP: 445
+Title: Add new APIs to customize Python memory allocators
+Version: $Revision$
+Last-Modified: $Date$
+Author: Victor Stinner <victor.stinner at gmail.com>
+BDFL-Delegate: Antoine Pitrou <solipsis at pitrou.net>
+Status: Accepted
+Type: Standards Track
+Content-Type: text/x-rst
+Created: 15-June-2013
+Python-Version: 3.4
+Resolution: http://mail.python.org/pipermail/python-dev/2013-July/127222.html
+
+Abstract
+========
+
+This PEP proposes new Application Programming Interfaces (API) to customize
+Python memory allocators.  The only implementation required to conform to
+this PEP is CPython, but other implementations may choose to be compatible,
+or to re-use a similar scheme.
+
+
+Rationale
+=========
+
+Use cases:
+
+* Applications embedding Python which want to isolate Python memory from
+  the memory of the application, or want to use a different memory
+  allocator optimized for their Python usage
+* Python running on embedded devices with low memory and slow CPU.
+  A custom memory allocator can be used for efficiency and/or to get
+  access to all the memory of the device.
+* Debug tools for memory allocators:
+
+  - track the memory usage (find memory leaks)
+  - get the location of a memory allocation: Python filename and line
+    number, and the size of a memory block
+  - detect buffer underflow, buffer overflow and misuse of Python
+    allocator APIs (see `Redesign Debug Checks on Memory Block
+    Allocators as Hooks`_)
+  - force memory allocations to fail to test handling of the
+    ``MemoryError`` exception
+
+
+Proposal
+========
+
+New Functions and Structures
+----------------------------
+
+* Add a new GIL-free (no need to hold the GIL) memory allocator:
+
+  - ``void* PyMem_RawMalloc(size_t size)``
+  - ``void* PyMem_RawRealloc(void *ptr, size_t new_size)``
+  - ``void PyMem_RawFree(void *ptr)``
+  - The newly allocated memory will not have been initialized in any
+    way.
+  - Requesting zero bytes returns a distinct non-*NULL* pointer if
+    possible, as if ``PyMem_Malloc(1)`` had been called instead.
+
+* Add a new ``PyMemAllocator`` structure::
+
+    typedef struct {
+        /* user context passed as the first argument to the 3 functions */
+        void *ctx;
+
+        /* allocate a memory block */
+        void* (*malloc) (void *ctx, size_t size);
+
+        /* allocate or resize a memory block */
+        void* (*realloc) (void *ctx, void *ptr, size_t new_size);
+
+        /* release a memory block */
+        void (*free) (void *ctx, void *ptr);
+    } PyMemAllocator;
+
+* Add a new ``PyMemAllocatorDomain`` enum to choose the Python
+  allocator domain. Domains:
+
+  - ``PYMEM_DOMAIN_RAW``: ``PyMem_RawMalloc()``, ``PyMem_RawRealloc()``
+    and ``PyMem_RawFree()``
+
+  - ``PYMEM_DOMAIN_MEM``: ``PyMem_Malloc()``, ``PyMem_Realloc()`` and
+    ``PyMem_Free()``
+
+  - ``PYMEM_DOMAIN_OBJ``: ``PyObject_Malloc()``, ``PyObject_Realloc()``
+    and ``PyObject_Free()``
+
+* Add new functions to get and set memory block allocators:
+
+  - ``void PyMem_GetAllocator(PyMemAllocatorDomain domain, PyMemAllocator *allocator)``
+  - ``void PyMem_SetAllocator(PyMemAllocatorDomain domain, PyMemAllocator *allocator)``
+  - The new allocator must return a distinct non-*NULL* pointer when
+    requesting zero bytes
+  - For the ``PYMEM_DOMAIN_RAW`` domain, the allocator must be
+    thread-safe: the GIL is not held when the allocator is called.
+
+* Add a new ``PyObjectArenaAllocator`` structure::
+
+    typedef struct {
+        /* user context passed as the first argument to the 2 functions */
+        void *ctx;
+
+        /* allocate an arena */
+        void* (*alloc) (void *ctx, size_t size);
+
+        /* release an arena */
+        void (*free) (void *ctx, void *ptr, size_t size);
+    } PyObjectArenaAllocator;
+
+* Add new functions to get and set the arena allocator used by
+  *pymalloc*:
+
+  - ``void PyObject_GetArenaAllocator(PyObjectArenaAllocator *allocator)``
+  - ``void PyObject_SetArenaAllocator(PyObjectArenaAllocator *allocator)``
+
+* Add a new function to reinstall the debug checks on memory allocators when
+  a memory allocator is replaced with ``PyMem_SetAllocator()``:
+
+  - ``void PyMem_SetupDebugHooks(void)``
+  - Install the debug hooks on all memory block allocators. The function can be
+    called more than once; hooks are only installed once.
+  - The function does nothing if Python is not compiled in debug mode.
+
+* Memory block allocators always return *NULL* if *size* is greater than
+  ``PY_SSIZE_T_MAX``. The check is done before calling the inner
+  function.
+
+.. note::
+    The *pymalloc* allocator is optimized for objects smaller than 512 bytes
+    with a short lifetime. It uses memory mappings with a fixed size of 256
+    KB called "arenas".
+
+Here is how the allocators are set up by default:
+
+* ``PYMEM_DOMAIN_RAW``, ``PYMEM_DOMAIN_MEM``: ``malloc()``,
+  ``realloc()`` and ``free()``; call ``malloc(1)`` when requesting zero
+  bytes
+* ``PYMEM_DOMAIN_OBJ``: *pymalloc* allocator which falls back on
+  ``PyMem_Malloc()`` for allocations larger than 512 bytes
+* *pymalloc* arena allocator: ``VirtualAlloc()`` and ``VirtualFree()`` on
+  Windows, ``mmap()`` and ``munmap()`` when available, or ``malloc()``
+  and ``free()``
+
+
+Redesign Debug Checks on Memory Block Allocators as Hooks
+---------------------------------------------------------
+
+Since Python 2.3, Python implements different checks on memory
+allocators in debug mode:
+
+* Newly allocated memory is filled with the byte ``0xCB``, freed memory
+  is filled with the byte ``0xDB``.
+* Detect API violations, ex: ``PyObject_Free()`` called on a memory
+  block allocated by ``PyMem_Malloc()``
+* Detect write before the start of the buffer (buffer underflow)
+* Detect write after the end of the buffer (buffer overflow)
+
+In Python 3.3, the checks are installed by replacing ``PyMem_Malloc()``,
+``PyMem_Realloc()``, ``PyMem_Free()``, ``PyObject_Malloc()``,
+``PyObject_Realloc()`` and ``PyObject_Free()`` using macros. The new
+allocator allocates a larger buffer and writes a pattern to detect buffer
+underflow, buffer overflow and use after free (by filling the buffer with
+the byte ``0xDB``). It uses the original ``PyObject_Malloc()``
+function to allocate memory. So ``PyMem_Malloc()`` and
+``PyMem_Realloc()`` indirectly call ``PyObject_Malloc()`` and
+``PyObject_Realloc()``.
+
+This PEP redesigns the debug checks as hooks on the existing allocators
+in debug mode. Examples of call traces without the hooks:
+
+* ``PyMem_RawMalloc()`` => ``_PyMem_RawMalloc()`` => ``malloc()``
+* ``PyMem_Realloc()`` => ``_PyMem_RawRealloc()`` => ``realloc()``
+* ``PyObject_Free()`` => ``_PyObject_Free()``
+
+Call traces when the hooks are installed (debug mode):
+
+* ``PyMem_RawMalloc()`` => ``_PyMem_DebugMalloc()``
+  => ``_PyMem_RawMalloc()`` => ``malloc()``
+* ``PyMem_Realloc()`` => ``_PyMem_DebugRealloc()``
+  => ``_PyMem_RawRealloc()`` => ``realloc()``
+* ``PyObject_Free()`` => ``_PyMem_DebugFree()``
+  => ``_PyObject_Free()``
+
+As a result, ``PyMem_Malloc()`` and ``PyMem_Realloc()`` now call
+``malloc()`` and ``realloc()`` in both release mode and debug mode,
+instead of calling ``PyObject_Malloc()`` and ``PyObject_Realloc()`` in
+debug mode.
+
+When at least one memory allocator is replaced with
+``PyMem_SetAllocator()``, the ``PyMem_SetupDebugHooks()`` function must
+be called to reinstall the debug hooks on top of the new allocator.
+
+
+Don't call malloc() directly anymore
+------------------------------------
+
+``PyObject_Malloc()`` falls back on ``PyMem_Malloc()`` instead of
+``malloc()`` if the size is greater than or equal to 512 bytes, and
+``PyObject_Realloc()`` falls back on ``PyMem_Realloc()`` instead of
+``realloc()``.
+
+Direct calls to ``malloc()`` are replaced with ``PyMem_Malloc()``, or
+``PyMem_RawMalloc()`` if the GIL is not held.
+
+External libraries like zlib or OpenSSL can be configured to allocate memory
+using ``PyMem_Malloc()`` or ``PyMem_RawMalloc()``. If the allocator of a
+library can only be replaced globally (rather than on an object-by-object
+basis), it shouldn't be replaced when Python is embedded in an application.
+
+For the "track memory usage" use case, it is important to track memory
+allocated in external libraries to have accurate reports, because these
+allocations can be large (e.g. they can raise a ``MemoryError`` exception)
+and would otherwise be missed in memory usage reports.
+
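The idea of a tracking hook can be modelled in Python (an illustrative
stand-in for the C-level ``PyMemAllocator`` hooks; the names and the
integer "pointer" handles are invented for this sketch):

```python
# Conceptual model of a "track memory usage" hook: wrap an allocator,
# record the size of each block, and keep a running total.
import itertools

total = 0          # bytes currently allocated
sizes = {}         # handle -> block size
_ids = itertools.count(1)

def hooked_malloc(size):
    global total
    handle = next(_ids)       # stand-in for the pointer returned by malloc()
    sizes[handle] = size
    total += size
    return handle

def hooked_free(handle):
    # free() only receives the address, so the hook must remember sizes
    # itself to be able to decrement the total.
    global total
    total -= sizes.pop(handle)

p = hooked_malloc(512)
q = hooked_malloc(1024)
hooked_free(p)
print(total)  # 1024
```

This is why the hook has to keep its own size bookkeeping: the ``free()``
method of an allocator only receives the address of the block.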
+
+Examples
+========
+
+Use case 1: Replace Memory Allocators, keep pymalloc
+----------------------------------------------------
+
+Dummy example wasting 2 bytes per memory block,
+and 10 bytes per *pymalloc* arena::
+
+    #include <Python.h>
+    #include <stdlib.h>
+
+    size_t alloc_padding = 2;
+    size_t arena_padding = 10;
+
+    void* my_malloc(void *ctx, size_t size)
+    {
+        size_t padding = *(size_t *)ctx;
+        return malloc(size + padding);
+    }
+
+    void* my_realloc(void *ctx, void *ptr, size_t new_size)
+    {
+        size_t padding = *(size_t *)ctx;
+        return realloc(ptr, new_size + padding);
+    }
+
+    void my_free(void *ctx, void *ptr)
+    {
+        free(ptr);
+    }
+
+    void* my_alloc_arena(void *ctx, size_t size)
+    {
+        size_t padding = *(size_t *)ctx;
+        return malloc(size + padding);
+    }
+
+    void my_free_arena(void *ctx, void *ptr, size_t size)
+    {
+        free(ptr);
+    }
+
+    void setup_custom_allocator(void)
+    {
+        PyMemAllocator alloc;
+        PyObjectArenaAllocator arena;
+
+        alloc.ctx = &alloc_padding;
+        alloc.malloc = my_malloc;
+        alloc.realloc = my_realloc;
+        alloc.free = my_free;
+
+        PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &alloc);
+        PyMem_SetAllocator(PYMEM_DOMAIN_MEM, &alloc);
+        /* leave PYMEM_DOMAIN_OBJ unchanged, use pymalloc */
+
+        arena.ctx = &arena_padding;
+        arena.alloc = my_alloc_arena;
+        arena.free = my_free_arena;
+        PyObject_SetArenaAllocator(&arena);
+
+        PyMem_SetupDebugHooks();
+    }
+
+
+Use case 2: Replace Memory Allocators, override pymalloc
+--------------------------------------------------------
+
+If you have a dedicated allocator optimized for allocations of objects
+smaller than 512 bytes with a short lifetime, pymalloc can be overridden
+(replace ``PyObject_Malloc()``).
+
+Dummy example wasting 2 bytes per memory block::
+
+    #include <Python.h>
+    #include <stdlib.h>
+
+    size_t padding = 2;
+
+    void* my_malloc(void *ctx, size_t size)
+    {
+        size_t padding = *(size_t *)ctx;
+        return malloc(size + padding);
+    }
+
+    void* my_realloc(void *ctx, void *ptr, size_t new_size)
+    {
+        size_t padding = *(size_t *)ctx;
+        return realloc(ptr, new_size + padding);
+    }
+
+    void my_free(void *ctx, void *ptr)
+    {
+        free(ptr);
+    }
+
+    void setup_custom_allocator(void)
+    {
+        PyMemAllocator alloc;
+        alloc.ctx = &padding;
+        alloc.malloc = my_malloc;
+        alloc.realloc = my_realloc;
+        alloc.free = my_free;
+
+        PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &alloc);
+        PyMem_SetAllocator(PYMEM_DOMAIN_MEM, &alloc);
+        PyMem_SetAllocator(PYMEM_DOMAIN_OBJ, &alloc);
+
+        PyMem_SetupDebugHooks();
+    }
+
+The *pymalloc* arena allocator does not need to be replaced, because it
+is no longer used by the new allocator.
+
+
+Use case 3: Setup Hooks On Memory Block Allocators
+--------------------------------------------------
+
+Example to setup hooks on all memory block allocators::
+
+    #include <Python.h>
+
+    struct {
+        PyMemAllocator raw;
+        PyMemAllocator mem;
+        PyMemAllocator obj;
+        /* ... */
+    } hook;
+
+    static void* hook_malloc(void *ctx, size_t size)
+    {
+        PyMemAllocator *alloc = (PyMemAllocator *)ctx;
+        void *ptr;
+        /* ... */
+        ptr = alloc->malloc(alloc->ctx, size);
+        /* ... */
+        return ptr;
+    }
+
+    static void* hook_realloc(void *ctx, void *ptr, size_t new_size)
+    {
+        PyMemAllocator *alloc = (PyMemAllocator *)ctx;
+        void *ptr2;
+        /* ... */
+        ptr2 = alloc->realloc(alloc->ctx, ptr, new_size);
+        /* ... */
+        return ptr2;
+    }
+
+    static void hook_free(void *ctx, void *ptr)
+    {
+        PyMemAllocator *alloc = (PyMemAllocator *)ctx;
+        /* ... */
+        alloc->free(alloc->ctx, ptr);
+        /* ... */
+    }
+
+    void setup_hooks(void)
+    {
+        PyMemAllocator alloc;
+        static int installed = 0;
+
+        if (installed)
+            return;
+        installed = 1;
+
+        alloc.malloc = hook_malloc;
+        alloc.realloc = hook_realloc;
+        alloc.free = hook_free;
+        PyMem_GetAllocator(PYMEM_DOMAIN_RAW, &hook.raw);
+        PyMem_GetAllocator(PYMEM_DOMAIN_MEM, &hook.mem);
+        PyMem_GetAllocator(PYMEM_DOMAIN_OBJ, &hook.obj);
+
+        alloc.ctx = &hook.raw;
+        PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &alloc);
+
+        alloc.ctx = &hook.mem;
+        PyMem_SetAllocator(PYMEM_DOMAIN_MEM, &alloc);
+
+        alloc.ctx = &hook.obj;
+        PyMem_SetAllocator(PYMEM_DOMAIN_OBJ, &alloc);
+    }
+
+.. note::
+   ``PyMem_SetupDebugHooks()`` does not need to be called because
+   memory allocators are not replaced: the debug checks on memory
+   block allocators are installed automatically at startup.
+
+
+Performance
+============
+
+The implementation of this PEP (issue #3329) has no visible overhead on
+the Python benchmark suite.
+
+Results of the `Python benchmark suite
+<http://hg.python.org/benchmarks>`_ (-b 2n3): some tests are 1.04x
+faster, some tests are 1.04x slower. Results of the pybench
+microbenchmark: 0.1% slower globally (diff between -4.9% and +5.6%).
+
+The full output of the benchmarks is attached to issue #3329.
+
+
+Rejected Alternatives
+=====================
+
+More specific functions to get/set memory allocators
+----------------------------------------------------
+
+A larger set of C API functions was originally proposed, with one pair
+of functions for each allocator domain:
+
+* ``void PyMem_GetRawAllocator(PyMemAllocator *allocator)``
+* ``void PyMem_GetAllocator(PyMemAllocator *allocator)``
+* ``void PyObject_GetAllocator(PyMemAllocator *allocator)``
+* ``void PyMem_SetRawAllocator(PyMemAllocator *allocator)``
+* ``void PyMem_SetAllocator(PyMemAllocator *allocator)``
+* ``void PyObject_SetAllocator(PyMemAllocator *allocator)``
+
+This alternative was rejected because it is not possible to write
+generic code with more specific functions: code must be duplicated for
+each memory allocator domain.
+
+
+Make PyMem_Malloc() reuse PyMem_RawMalloc() by default
+------------------------------------------------------
+
+If ``PyMem_Malloc()`` called ``PyMem_RawMalloc()`` by default,
+calling ``PyMem_SetAllocator(PYMEM_DOMAIN_RAW, alloc)`` would also
+patch ``PyMem_Malloc()`` indirectly.
+
+This alternative was rejected because ``PyMem_SetAllocator()`` would
+have a different behaviour depending on the domain. Always having the
+same behaviour is less error-prone.
+
+
+Add a new PYDEBUGMALLOC environment variable
+--------------------------------------------
+
+It was proposed to add a new ``PYDEBUGMALLOC`` environment variable to
+enable debug checks on memory block allocators. It would have had the
+same effect as calling ``PyMem_SetupDebugHooks()``, without the need to
+write any C code. Another advantage would be the ability to enable debug
+checks even in release mode: debug checks would always be compiled in,
+but only enabled when the environment variable is present and non-empty.
+
+This alternative was rejected because a new environment variable would
+make Python initialization even more complex. `PEP 432
+<http://www.python.org/dev/peps/pep-0432/>`_ tries to simplify the
+CPython startup sequence.
+
+
+Use macros to get customizable allocators
+-----------------------------------------
+
+To have no overhead in the default configuration, customizable
+allocators would be an optional feature enabled by a configuration
+option or by macros.
+
+This alternative was rejected because the use of macros implies having
+to recompile extension modules to use the new allocator and allocator
+hooks. Not having to recompile Python nor extension modules makes debug
+hooks easier to use in practice.
+
+
+Pass the C filename and line number
+-----------------------------------
+
+Define allocator functions as macros using ``__FILE__`` and ``__LINE__``
+to get the C filename and line number of a memory allocation.
+
+Example of ``PyMem_Malloc`` macro with the modified
+``PyMemAllocator`` structure::
+
+    typedef struct {
+        /* user context passed as the first argument
+           to the 3 functions */
+        void *ctx;
+
+        /* allocate a memory block */
+        void* (*malloc) (void *ctx, const char *filename, int lineno,
+                         size_t size);
+
+        /* allocate or resize a memory block */
+        void* (*realloc) (void *ctx, const char *filename, int lineno,
+                          void *ptr, size_t new_size);
+
+        /* release a memory block */
+        void (*free) (void *ctx, const char *filename, int lineno,
+                      void *ptr);
+    } PyMemAllocator;
+
+    void* _PyMem_MallocTrace(const char *filename, int lineno,
+                             size_t size);
+
+    /* the function is still needed for the Python stable ABI */
+    void* PyMem_Malloc(size_t size);
+
+    #define PyMem_Malloc(size) \
+            _PyMem_MallocTrace(__FILE__, __LINE__, size)
+
+The GC allocator functions would also have to be patched. For example,
+``_PyObject_GC_Malloc()`` is used in many C functions and so objects of
+different types would have the same allocation location.
+
+This alternative was rejected because passing a filename and a line
+number to each allocator makes the API more complex: three arguments
+(ctx, filename, lineno) have to be passed to each allocator function,
+instead of just a context argument (ctx). Having to also modify the GC
+allocator functions adds too much complexity for little gain.
+
+
+GIL-free PyMem_Malloc()
+-----------------------
+
+In Python 3.3, when Python is compiled in debug mode, ``PyMem_Malloc()``
+indirectly calls ``PyObject_Malloc()`` which requires the GIL to be
+held (it isn't thread-safe).  That's why ``PyMem_Malloc()`` must be called
+with the GIL held.
+
+This PEP changes ``PyMem_Malloc()``: it now always calls ``malloc()``
+rather than ``PyObject_Malloc()``.  The "GIL must be held" restriction
+could therefore be removed from ``PyMem_Malloc()``.
+
+This alternative was rejected because allowing ``PyMem_Malloc()`` to be
+called without holding the GIL can break applications which set up their
+own allocators or allocator hooks.  Holding the GIL is convenient to
+develop a custom allocator: no need to care about other threads.  It is
+also convenient for a debug allocator hook: Python objects can be safely
+inspected, and the C API may be used for reporting.
+
+Moreover, calling ``PyGILState_Ensure()`` in a memory allocator has
+unexpected behaviour, especially at Python startup and when creating a
+new Python thread state.  It is better to free custom allocators from
+the responsibility of acquiring the GIL.
+
+
+Don't add PyMem_RawMalloc()
+---------------------------
+
+Replace ``malloc()`` with ``PyMem_Malloc()``, but only if the GIL is
+held.  Otherwise, keep ``malloc()`` unchanged.
+
+``PyMem_Malloc()`` is called without the GIL held in some Python
+functions.  For example, the ``main()`` and ``Py_Main()`` functions of
+Python call ``PyMem_Malloc()`` when the GIL does not exist yet. In this
+case, ``PyMem_Malloc()`` would have to be replaced with ``malloc()`` (or
+``PyMem_RawMalloc()``).
+
+This alternative was rejected because ``PyMem_RawMalloc()`` is required
+for accurate reports of the memory usage. When a debug hook is used to
+track the memory usage, the memory allocated by direct calls to
+``malloc()`` cannot be tracked. ``PyMem_RawMalloc()`` can be hooked and
+so all the memory allocated by Python can be tracked, including
+memory allocated without holding the GIL.
+
+
+Use existing debug tools to analyze memory use
+----------------------------------------------
+
+There are many existing debug tools to analyze memory use. Some
+examples: `Valgrind <http://valgrind.org/>`_, `Purify
+<http://ibm.com/software/awdtools/purify/>`_, `Clang AddressSanitizer
+<http://code.google.com/p/address-sanitizer/>`_, `failmalloc
+<http://www.nongnu.org/failmalloc/>`_, etc.
+
+The problem is to retrieve the Python object related to a memory pointer
+to read its type and/or its content. Another issue is to retrieve the
+source of the memory allocation: the C backtrace is usually useless
+(same reasoning as for macros using ``__FILE__`` and ``__LINE__``, see
+`Pass the C filename and line number`_); the Python filename and line
+number (or even the Python traceback) are more useful.
+
+This alternative was rejected because classic tools are unable to
+introspect Python internals to collect such information. Being able to
+set up a hook on allocators called with the GIL held makes it possible
+to collect a lot of useful data from Python internals.
+
+
+Add a msize() function
+----------------------
+
+Add another function to ``PyMemAllocator`` and
+``PyObjectArenaAllocator`` structures::
+
+    size_t msize(void *ptr);
+
+This function returns the size of a memory block or a memory mapping.
+It returns ``(size_t)-1`` if the function is not implemented or if the
+pointer is unknown (e.g. a ``NULL`` pointer).
+
+On Windows, this function can be implemented using ``_msize()`` and
+``VirtualQuery()``.
+
+The function can be used to implement a hook tracking the memory usage.
+The ``free()`` method of an allocator only gets the address of a memory
+block, whereas the size of the memory block is required to update the
+memory usage.
+
+The additional ``msize()`` function was rejected because only a few
+platforms implement it. For example, Linux with the GNU libc does not
+provide a function to get the size of a memory block. ``msize()`` is not
+currently used in the Python source code. The function would only be
+used to track memory use, and would make the API more complex. A debug
+hook can implement the function internally; there is no need to add it
+to the ``PyMemAllocator`` and ``PyObjectArenaAllocator`` structures.
+
+
+No context argument
+-------------------
+
+Simplify the signature of allocator functions by removing the context
+argument:
+
+* ``void* malloc(size_t size)``
+* ``void* realloc(void *ptr, size_t new_size)``
+* ``void free(void *ptr)``
+
+It is likely that an allocator hook will be reused for
+``PyMem_SetAllocator()`` and ``PyObject_SetAllocator()``, or even
+``PyMem_SetRawAllocator()``, but the hook must call a different function
+depending on the allocator. The context is a convenient way to reuse the
+same custom allocator or hook for different Python allocators.
+
+In C++, the context can be used to pass *this*.
+
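The benefit of keeping the *ctx* argument can be sketched in Python,
where a closure cell plays the role of the context pointer (the names
and the ``bytearray`` stand-in for ``malloc()`` are illustrative only):

```python
# One hook body reused for several allocator "domains"; the context
# tells the hook which per-domain state to update, mirroring the role
# of the PyMemAllocator.ctx pointer in the C API.
stats = {"raw": 0, "mem": 0, "obj": 0}

def make_malloc_hook(ctx):
    def malloc_hook(size):
        stats[ctx] += size        # per-domain bookkeeping selected by ctx
        return bytearray(size)    # stand-in for the real underlying malloc
    return malloc_hook

raw_malloc = make_malloc_hook("raw")
mem_malloc = make_malloc_hook("mem")

raw_malloc(100)
mem_malloc(50)
print(stats["raw"], stats["mem"])  # 100 50
```

Without a context, a separate hook function would have to be written for
each domain.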
+
+External Libraries
+==================
+
+Examples of APIs used to customize memory allocators.
+
+Libraries used by Python:
+
+* OpenSSL: `CRYPTO_set_mem_functions()
+  <http://git.openssl.org/gitweb/?p=openssl.git;a=blob;f=crypto/mem.c;h=f7984fa958eb1edd6c61f6667f3f2b29753be662;hb=HEAD#l124>`_
+  to set memory management functions globally
+* expat: `parserCreate()
+  <http://hg.python.org/cpython/file/cc27d50bd91a/Modules/expat/xmlparse.c#l724>`_
+  has a per-instance memory handler
+* zlib: `zlib 1.2.8 Manual <http://www.zlib.net/manual.html#Usage>`_,
+  pass an opaque pointer
+* bz2: `bzip2 and libbzip2, version 1.0.5
+  <http://www.bzip.org/1.0.5/bzip2-manual-1.0.5.html>`_,
+  pass an opaque pointer
+* lzma: `LZMA SDK - How to Use
+  <http://www.asawicki.info/news_1368_lzma_sdk_-_how_to_use.html>`_,
+  pass an opaque pointer
+* libmpdec: no opaque pointer (classic malloc API)
+
+Other libraries:
+
+* glib: `g_mem_set_vtable()
+  <http://developer.gnome.org/glib/unstable/glib-Memory-Allocation.html#g-mem-set-vtable>`_
+* libxml2:
+  `xmlGcMemSetup() <http://xmlsoft.org/html/libxml-xmlmemory.html>`_,
+  global
+* Oracle's OCI: `Oracle Call Interface Programmer's Guide,
+  Release 2 (9.2)
+  <http://docs.oracle.com/cd/B10501_01/appdev.920/a96584/oci15re4.htm>`_,
+  pass an opaque pointer
+
+The new *ctx* parameter of this PEP was inspired by the API of zlib and
+Oracle's OCI libraries.
+
+See also the `GNU libc: Memory Allocation Hooks
+<http://www.gnu.org/software/libc/manual/html_node/Hooks-for-Malloc.html>`_
+which uses a different approach to hook memory allocators.
+
+
+Memory Allocators
+=================
+
+The C standard library provides the well-known ``malloc()`` function.
+Its implementation depends on the platform and on the C library. The GNU
+C library uses a modified ptmalloc2, based on "Doug Lea's Malloc"
+(dlmalloc). FreeBSD uses `jemalloc
+<http://www.canonware.com/jemalloc/>`_. Google provides *tcmalloc* which
+is part of `gperftools <http://code.google.com/p/gperftools/>`_.
+
+``malloc()`` uses two kinds of memory: heap and memory mappings. Memory
+mappings are usually used for large allocations (e.g. larger than 256
+KB), whereas the heap is used for small allocations.
+
+On UNIX, the heap is handled by ``brk()`` and ``sbrk()`` system calls,
+and it is contiguous.  On Windows, the heap is handled by
+``HeapAlloc()`` and can be discontiguous. Memory mappings are handled by
+``mmap()`` on UNIX and ``VirtualAlloc()`` on Windows; they can be
+discontiguous.
+
+Releasing a memory mapping immediately returns the memory to the
+system. On UNIX, heap memory is only returned to the system if the
+released block is located at the end of the heap. Otherwise, the memory
+is only returned to the system when all the memory located after the
+released memory is also released.
+
+To allocate memory on the heap, an allocator tries to reuse free space.
+If there is no contiguous space big enough, the heap must be enlarged,
+even if there is more free space than the required size.  This issue is
+called "memory fragmentation": the memory usage seen by the system is
+higher than the real usage. On Windows, ``HeapAlloc()`` creates a new
+memory mapping with ``VirtualAlloc()`` if there is not enough free
+contiguous memory.
+
+CPython has a *pymalloc* allocator for allocations smaller than 512
+bytes. This allocator is optimized for small objects with a short
+lifetime. It uses memory mappings called "arenas" with a fixed size of
+256 KB.
+
+Other allocators:
+
+* Windows provides a `Low-fragmentation Heap
+  <http://msdn.microsoft.com/en-us/library/windows/desktop/aa366750%28v=vs.85%29.aspx>`_.
+
+* The Linux kernel uses `slab allocation
+  <http://en.wikipedia.org/wiki/Slab_allocation>`_.
+
+* The glib library has a `Memory Slice API
+  <https://developer.gnome.org/glib/unstable/glib-Memory-Slices.html>`_:
+  an efficient way to allocate groups of equal-sized chunks of memory
+
+This PEP makes it possible to choose exactly which memory allocator is
+used by your application, depending on its usage of memory (number of
+allocations, size of allocations, lifetime of objects, etc.).
+
+
+Links
+=====
+
+CPython issues related to memory allocation:
+
+* `Issue #3329: Add new APIs to customize memory allocators
+  <http://bugs.python.org/issue3329>`_
+* `Issue #13483: Use VirtualAlloc to allocate memory arenas
+  <http://bugs.python.org/issue13483>`_
+* `Issue #16742: PyOS_Readline drops GIL and calls PyOS_StdioReadline,
+  which isn't thread safe <http://bugs.python.org/issue16742>`_
+* `Issue #18203: Replace calls to malloc() with PyMem_Malloc() or
+  PyMem_RawMalloc() <http://bugs.python.org/issue18203>`_
+* `Issue #18227: Use Python memory allocators in external libraries like
+  zlib or OpenSSL <http://bugs.python.org/issue18227>`_
+
+Projects analyzing the memory usage of Python applications:
+
+* `pytracemalloc
+  <https://pypi.python.org/pypi/pytracemalloc>`_
+* `Meliae: Python Memory Usage Analyzer
+  <https://pypi.python.org/pypi/meliae>`_
+* `Guppy-PE: umbrella package combining Heapy and GSL
+  <http://guppy-pe.sourceforge.net/>`_
+* `PySizer (developed for Python 2.4)
+  <http://pysizer.8325.org/>`_
+
+
+Copyright
+=========
+
+This document has been placed into the public domain.
+
diff --git a/pep-0446.txt b/pep-0446.txt
new file mode 100644
--- /dev/null
+++ b/pep-0446.txt
@@ -0,0 +1,248 @@
+PEP: 446
+Title: Add new parameters to configure the inheritance of files and for non-blocking sockets
+Version: $Revision$
+Last-Modified: $Date$
+Author: Victor Stinner <victor.stinner at gmail.com>
+Status: Draft
+Type: Standards Track
+Content-Type: text/x-rst
+Created: 3-July-2013
+Python-Version: 3.4
+
+
+Abstract
+========
+
+This PEP proposes new portable parameters and functions to configure the
+inheritance of file descriptors and the non-blocking flag of sockets.
+
+
+Rationale
+=========
+
+Inheritance of file descriptors
+-------------------------------
+
+The inheritance of file descriptors in child processes can be configured
+on each file descriptor using a *close-on-exec* flag. By default, the
+close-on-exec flag is not set.
+
+On Windows, the close-on-exec flag is the inverse of ``HANDLE_FLAG_INHERIT``. File
+descriptors are not inherited if the ``bInheritHandles`` parameter of
+the ``CreateProcess()`` function is ``FALSE``, even if the
+``HANDLE_FLAG_INHERIT`` flag is set. If ``bInheritHandles`` is ``TRUE``,
+only file descriptors with ``HANDLE_FLAG_INHERIT`` flag set are
+inherited, others are not.
+
+On UNIX, the close-on-exec flag is ``O_CLOEXEC``. File descriptors with
+the ``O_CLOEXEC`` flag set are closed at the execution of a new program
+(e.g. when calling ``execv()``).
+
+The ``O_CLOEXEC`` flag has no effect on ``fork()``: all file descriptors
+are inherited by the child process. Furthermore, most properties of file
+descriptors are shared between the parent and the child processes,
+except file attributes, which are duplicated (``O_CLOEXEC`` is the only
+file attribute).  Setting the ``O_CLOEXEC`` flag of a file descriptor in
+the child process does not change the ``O_CLOEXEC`` flag of the file
+descriptor in the parent process.
+
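On UNIX, the flag can already be inspected and changed manually through
``fcntl()``; the following Python sketch (UNIX-only, the helper names
are illustrative) shows the extra system calls involved:

```python
import fcntl
import os

def get_cloexec(fd):
    # FD_CLOEXEC is the UNIX close-on-exec flag described above.
    return bool(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC)

def set_cloexec(fd, cloexec=True):
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
    if cloexec:
        flags |= fcntl.FD_CLOEXEC
    else:
        flags &= ~fcntl.FD_CLOEXEC
    fcntl.fcntl(fd, fcntl.F_SETFD, flags)

r, w = os.pipe()
set_cloexec(r, False)
assert not get_cloexec(r)
set_cloexec(r, True)          # r will now be closed across exec()
assert get_cloexec(r)
os.close(r); os.close(w)
```

Each change costs two ``fcntl()`` system calls (read the flags, write
them back), which is part of the motivation for setting the flag at
descriptor creation instead.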
+
+Issues of the inheritance of file descriptors
+---------------------------------------------
+
+Inheritance of file descriptors causes issues. For example, closing a
+file descriptor in the parent process does not release the resource
+(file, socket, ...), because the file descriptor is still open in the
+child process.
+
+Leaking file descriptors is also a major security vulnerability. An
+untrusted child process can read sensitive data like passwords and take
+control of the parent process through leaked file descriptors. This is,
+for example, a known way to escape from a chroot.
+
+
+Non-blocking sockets
+--------------------
+
+To handle multiple network clients in a single thread, a multiplexing
+function like ``select()`` can be used. For best performance, sockets
+must be configured as non-blocking. Operations like ``send()`` and
+``recv()`` return an ``EAGAIN`` or ``EWOULDBLOCK`` error if the
+operation would block.
+
+By default, newly created sockets are blocking. Setting the non-blocking
+mode requires additional system calls.
+
+On UNIX, the blocking flag is ``O_NONBLOCK``: a pipe and a socket are
+non-blocking if the ``O_NONBLOCK`` flag is set.
+
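With the existing Python API, the extra system call is issued by
``socket.setblocking()``; a non-blocking ``recv()`` then fails
immediately instead of waiting for data:

```python
import socket

a, b = socket.socketpair()
b.setblocking(False)        # issues an extra fcntl()/ioctl() system call

result = None
try:
    b.recv(1)               # no data is pending, so this cannot block
except BlockingIOError:     # EAGAIN/EWOULDBLOCK surfaces as this exception
    result = "would block"

a.close()
b.close()
print(result)  # would block
```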
+
+Setting flags at the creation of the file descriptor
+----------------------------------------------------
+
+Windows and recent versions of other operating systems like Linux
+support setting the close-on-exec flag directly at the creation of file
+descriptors, and close-on-exec and blocking flags at the creation of
+sockets.
+
+Setting these flags at the creation is atomic and avoids additional
+system calls.
+
+
+Proposal
+========
+
+New cloexec and blocking Parameters
+-----------------------------------
+
+Add a new optional *cloexec* parameter to functions creating file
+descriptors:
+
+* ``io.FileIO``
+* ``io.open()``
+* ``open()``
+* ``os.dup()``
+* ``os.dup2()``
+* ``os.fdopen()``
+* ``os.open()``
+* ``os.openpty()``
+* ``os.pipe()``
+* ``select.devpoll()``
+* ``select.epoll()``
+* ``select.kqueue()``
+
+Add new optional *cloexec* and *blocking* parameters to functions
+creating sockets:
+
+* ``asyncore.dispatcher.create_socket()``
+* ``socket.socket()``
+* ``socket.socket.accept()``
+* ``socket.socket.dup()``
+* ``socket.socket.fromfd()``
+* ``socket.socketpair()``
+
+The default value of *cloexec* is ``False`` and the default value of
+*blocking* is ``True``.
+
+Atomicity is not guaranteed. If the platform does not support
+setting close-on-exec and blocking flags at the creation of the file
+descriptor or socket, the flags are set using additional system calls.
+
+
+New Functions
+-------------
+
+Add new functions to get and set the close-on-exec flag of a file
+descriptor, available on all platforms:
+
+* ``os.get_cloexec(fd:int) -> bool``
+* ``os.set_cloexec(fd:int, cloexec: bool)``
+
+Add new functions to get and set the blocking flag of a file
+descriptor, only available on UNIX:
+
+* ``os.get_blocking(fd:int) -> bool``
+* ``os.set_blocking(fd:int, blocking: bool)``
+
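These functions do not exist yet; on UNIX, their intended behaviour can
be approximated today with the ``fcntl`` module and the ``O_NONBLOCK``
flag (a rough sketch, not the proposed implementation):

```python
import fcntl
import os

def get_blocking(fd):
    # A descriptor is blocking when O_NONBLOCK is *not* set.
    return not (fcntl.fcntl(fd, fcntl.F_GETFL) & os.O_NONBLOCK)

def set_blocking(fd, blocking):
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    if blocking:
        flags &= ~os.O_NONBLOCK
    else:
        flags |= os.O_NONBLOCK
    fcntl.fcntl(fd, fcntl.F_SETFL, flags)

r, w = os.pipe()
assert get_blocking(r)        # pipes are created blocking by default
set_blocking(r, False)
assert not get_blocking(r)
os.close(r); os.close(w)
```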
+
+Other Changes
+-------------
+
+The ``subprocess.Popen`` class must clear the close-on-exec flag of file
+descriptors of the ``pass_fds`` parameter. The flag is cleared in the
+child process before executing the program; this does not change the
+flag in the parent process.
+
+The close-on-exec flag must also be set on private file descriptors and
+sockets in the Python standard library. For example, on UNIX,
+``os.urandom()`` opens ``/dev/urandom`` to read some random bytes and the
+file descriptor is closed at function exit. The file descriptor is not
+expected to be inherited by child processes.
+
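The effect of ``pass_fds`` can be demonstrated with the current
``subprocess`` module on UNIX (the inline child program is illustrative):

```python
import os
import subprocess
import sys

r, w = os.pipe()

# The child writes to the inherited descriptor; pass_fds tells Popen to
# keep it open in the child (i.e. to not close it before exec()).
child_code = "import os, sys; os.write(int(sys.argv[1]), b'ok')"
proc = subprocess.Popen(
    [sys.executable, "-c", child_code, str(w)],
    pass_fds=[w])
os.close(w)                  # the parent's copy is no longer needed

data = os.read(r, 2)
proc.wait()
os.close(r)
print(data)  # b'ok'
```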
+
+Rejected Alternatives
+=====================
+
+PEP 433
+-------
+
+PEP 433, entitled "Easier suppression of file descriptor inheritance",
+is a previous attempt that proposed various other alternatives, but no
+consensus could be reached.
+
+This PEP has a well-defined behaviour (the default value of the new
+*cloexec* parameter is not configurable), is more conservative (no
+backward compatibility issue), and is much simpler.
+
+
+Add blocking parameter for file descriptors and use Windows overlapped I/O
+--------------------------------------------------------------------------
+
+Windows supports non-blocking operations on files using an extension of
+the Windows API called "Overlapped I/O". Using this extension requires
+modifying the Python standard library and applications to pass an
+``OVERLAPPED`` structure and an event loop to wait for the completion of
+operations.
+
+This PEP only tries to expose portable flags on file descriptors and
+sockets. Supporting overlapped I/O requires an abstraction providing a
+high-level and portable API for asynchronous operations on files and
+sockets. Overlapped I/O is out of the scope of this PEP.
+
+UNIX supports non-blocking files; moreover, recent versions of operating
+systems support setting the non-blocking flag at the creation of a file
+descriptor. It would be possible to add a new optional *blocking*
+parameter to Python functions creating file descriptors. On Windows,
+creating a file descriptor with ``blocking=False`` would raise a
+``NotImplementedError``. This behaviour is not acceptable for the ``os``
+module, which is designed as a thin wrapper on the C functions of the
+operating system. If a platform does not support a function, the
+function should not be available on the platform. For example,
+the ``os.fork()`` function is not available on Windows.
+
+UNIX has more flags on file descriptors: ``O_DSYNC``, ``O_SYNC``,
+``O_DIRECT``, etc.  Adding all these flags complicates the signature and
+the implementation of functions creating file descriptors, like
+``open()``. Moreover, these flags do not work on all file types, and are
+not portable.
+
+For all these reasons, this alternative was rejected. PEP 3156
+proposes an abstraction for asynchronous I/O supporting non-blocking
+files on Windows.
+
+
+Links
+=====
+
+Python issues:
+
+* `#10115: Support accept4() for atomic setting of flags at socket
+  creation <http://bugs.python.org/issue10115>`_
+* `#12105: open() does not able to set flags, such as O_CLOEXEC
+  <http://bugs.python.org/issue12105>`_
+* `#12107: TCP listening sockets created without FD_CLOEXEC flag
+  <http://bugs.python.org/issue12107>`_
+* `#16850: Add "e" mode to open(): close-and-exec
+  (O_CLOEXEC) / O_NOINHERIT <http://bugs.python.org/issue16850>`_
+* `#16860: Use O_CLOEXEC in the tempfile module
+  <http://bugs.python.org/issue16860>`_
+* `#16946: subprocess: _close_open_fd_range_safe() does not set
+  close-on-exec flag on Linux < 2.6.23 if O_CLOEXEC is defined
+  <http://bugs.python.org/issue16946>`_
+* `#17070: Use the new cloexec to improve security and avoid bugs
+  <http://bugs.python.org/issue17070>`_
+
+Other links:
+
+* `Secure File Descriptor Handling
+  <http://udrepper.livejournal.com/20407.html>`_ (Ulrich Drepper,
+  2008)
+* `Ghosts of Unix past, part 2: Conflated designs
+  <http://lwn.net/Articles/412131/>`_ (Neil Brown, 2010) explains the
+  history of ``O_CLOEXEC`` and ``O_NONBLOCK`` flags
+
+
+Copyright
+=========
+
+This document has been placed into the public domain.
+
diff --git a/pep-0447.txt b/pep-0447.txt
new file mode 100644
--- /dev/null
+++ b/pep-0447.txt
@@ -0,0 +1,408 @@
+PEP: 447
+Title: Add __locallookup__ method to metaclass
+Version: $Revision$
+Last-Modified: $Date$
+Author: Ronald Oussoren <ronaldoussoren at mac.com>
+Status: Draft
+Type: Standards Track
+Content-Type: text/x-rst
+Created: 12-Jun-2013
+Post-History: 2-Jul-2013, 15-Jul-2013, 29-Jul-2013
+
+
+Abstract
+========
+
+Currently ``object.__getattribute__`` and ``super.__getattribute__`` peek
+in the ``__dict__`` of the classes on a class's MRO when looking for
+an attribute. This PEP adds an optional ``__locallookup__`` method to
+a metaclass that can be used to override this behavior.
+
+Rationale
+=========
+
+It is currently not possible to influence how the `super class`_ looks
+up attributes (that is, ``super.__getattribute__`` unconditionally
+peeks in the class ``__dict__``), and that can be problematic for
+dynamic classes that can grow new methods on demand.
+
+The ``__locallookup__`` method makes it possible to dynamically add
+attributes even when looking them up using the `super class`_.
+
+The new method affects ``object.__getattribute__`` (and
+`PyObject_GenericGetAttr`_) as well for consistency.
+
+Background
+----------
+
+The current behavior of ``super.__getattribute__`` causes problems for
+classes that are dynamic proxies for other (non-Python) classes or types,
+an example of which is `PyObjC`_. PyObjC creates a Python class for every
+class in the Objective-C runtime, and looks up methods in the Objective-C
+runtime when they are used. This works fine for normal access, but doesn't
+work for access with ``super`` objects. Because of this PyObjC currently
+includes a custom ``super`` that must be used with its classes.
+
+The API in this PEP makes it possible to remove the custom ``super`` and
+simplifies the implementation because the custom lookup behavior can be
+added in a central location.
+
+
+The superclass attribute lookup hook
+====================================
+
+Both ``super.__getattribute__`` and ``object.__getattribute__`` (or
+`PyObject_GenericGetAttr`_ in C code) walk an object's MRO and peek in the
+class' ``__dict__`` to look up attributes. A way to affect this lookup is
+to use a method on the metaclass of the type, which by default looks up
+the name in the class ``__dict__``.
+
+In Python code
+--------------
+
+A metatype can define a method ``__locallookup__`` that is called during
+attribute resolution by both ``super.__getattribute__`` and
+``object.__getattribute__``::
+
+    class MetaType(type):
+        def __locallookup__(cls, name):
+            try:
+                return cls.__dict__[name]
+            except KeyError:
+                raise AttributeError(name) from None
+
+The ``__locallookup__`` method takes as its arguments a class and the
+name of the attribute that is looked up. It should return the value of
+the attribute without invoking descriptors, or raise `AttributeError`_
+when the name cannot be found.
+
+The `type`_ class provides a default implementation for ``__locallookup__``, that
+looks up the name in the class dictionary.
+
+Example usage
+.............
+
+The code below implements a silly metaclass that redirects attribute lookup to uppercase
+versions of names::
+
+    class UpperCaseAccess(type):
+        def __locallookup__(cls, name):
+            return cls.__dict__[name.upper()]
+
+    class SillyObject(metaclass=UpperCaseAccess):
+        def m(self):
+            return 42
+
+        def M(self):
+            return "fortytwo"
+
+    obj = SillyObject()
+    assert obj.m() == "fortytwo"
+
+
+In C code
+---------
+
+A new slot ``tp_locallookup`` is added to the ``PyTypeObject`` struct; this
+slot corresponds to the ``__locallookup__`` method on `type`_.
+
+The slot has the following prototype::
+
+    PyObject* (*locallookupfunc)(PyTypeObject* cls, PyObject* name);
+
+This method should look up *name* in the namespace of *cls*, without looking at
+superclasses, and should not invoke descriptors. The method returns ``NULL`` without
+setting an exception when *name* cannot be found, and returns a new reference
+(not a borrowed reference) otherwise.
+
+Use of this hook by the interpreter
+-----------------------------------
+
+The new method is required for metatypes and as such is defined on `type`_.  Both
+``super.__getattribute__`` and ``object.__getattribute__``/`PyObject_GenericGetAttr`_
+(through ``_PyType_Lookup``) use this ``__locallookup__`` method when walking
+the MRO.
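
The MRO walk described above can be approximated in pure Python. This is a
hedged sketch only: ``mro_lookup`` is a hypothetical helper standing in for
the private ``_PyType_Lookup``, and is not part of the proposal itself.

```python
# Hypothetical metaclass using the proposed hook (mirrors the example
# given earlier in this PEP).
class MetaType(type):
    def __locallookup__(cls, name):
        try:
            return cls.__dict__[name]
        except KeyError:
            raise AttributeError(name) from None

def mro_lookup(klass, name):
    """Hypothetical approximation of _PyType_Lookup: ask each class's
    metaclass for the attribute; the first hit along the MRO wins."""
    for cls in klass.__mro__:
        lookup = getattr(type(cls), '__locallookup__', None)
        if lookup is None:
            # Plain 'type' metaclass: fall back to peeking in __dict__.
            if name in cls.__dict__:
                return cls.__dict__[name]
            continue
        try:
            return lookup(cls, name)
        except AttributeError:
            continue
    raise AttributeError(name)

class Base(metaclass=MetaType):
    def greet(self):
        return "base"

class Child(Base):
    pass

# The method is found on Base via MetaType.__locallookup__, without
# invoking descriptors along the way.
assert mro_lookup(Child, 'greet') is Base.__dict__['greet']
```

Note that the caller (here, the attribute machinery) remains responsible for
invoking any descriptor returned by the hook.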
+
+Other changes to the implementation
+-----------------------------------
+
+The change for `PyObject_GenericGetAttr`_ will be done by changing the private function
+``_PyType_Lookup``. This currently returns a borrowed reference, but must return a new
+reference when the ``__locallookup__`` method is present. Because of this,
+``_PyType_Lookup`` will be renamed to ``_PyType_LookupName``; this will cause
+compile-time errors for all out-of-tree users of this private API.
+
+The attribute lookup cache in ``Objects/typeobject.c`` is disabled for classes that have a
+metaclass that overrides ``__locallookup__``, because using the cache might not be valid
+for such classes.
+
+Performance impact
+------------------
+
+The pybench output below compares an implementation of this PEP with the regular
+source tree, both based on changeset a5681f50bae2, run on an otherwise idle machine
+with a Core i7 processor running CentOS 6.4.
+
+Even though the machine was idle there were clear differences between runs;
+I've seen the difference in "minimum time" vary from -0.1% to +1.5%, with similar
+(but slightly smaller) variation in the "average time" difference.
+
+::
+
+	-------------------------------------------------------------------------------
+	PYBENCH 2.1
+	-------------------------------------------------------------------------------
+	* using CPython 3.4.0a0 (default, Jul 29 2013, 13:01:34) [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)]
+	* disabled garbage collection
+	* system check interval set to maximum: 2147483647
+	* using timer: time.perf_counter
+	* timer: resolution=1e-09, implementation=clock_gettime(CLOCK_MONOTONIC)
+
+	-------------------------------------------------------------------------------
+	Benchmark: pep447.pybench
+	-------------------------------------------------------------------------------
+
+	    Rounds: 10
+	    Warp:   10
+	    Timer:  time.perf_counter
+
+	    Machine Details:
+	       Platform ID:    Linux-2.6.32-358.114.1.openstack.el6.x86_64-x86_64-with-centos-6.4-Final
+	       Processor:      x86_64
+
+	    Python:
+	       Implementation: CPython
+	       Executable:     /tmp/default-pep447/bin/python3
+	       Version:        3.4.0a0
+	       Compiler:       GCC 4.4.7 20120313 (Red Hat 4.4.7-3)
+	       Bits:           64bit
+	       Build:          Jul 29 2013 14:09:12 (#default)
+	       Unicode:        UCS4
+
+
+	-------------------------------------------------------------------------------
+	Comparing with: default.pybench
+	-------------------------------------------------------------------------------
+
+	    Rounds: 10
+	    Warp:   10
+	    Timer:  time.perf_counter
+
+	    Machine Details:
+	       Platform ID:    Linux-2.6.32-358.114.1.openstack.el6.x86_64-x86_64-with-centos-6.4-Final
+	       Processor:      x86_64
+
+	    Python:
+	       Implementation: CPython
+	       Executable:     /tmp/default/bin/python3
+	       Version:        3.4.0a0
+	       Compiler:       GCC 4.4.7 20120313 (Red Hat 4.4.7-3)
+	       Bits:           64bit
+	       Build:          Jul 29 2013 13:01:34 (#default)
+	       Unicode:        UCS4
+
+
+	Test                             minimum run-time        average  run-time
+					 this    other   diff    this    other   diff
+	-------------------------------------------------------------------------------
+		  BuiltinFunctionCalls:    45ms    44ms   +1.3%    45ms    44ms   +1.3%
+		   BuiltinMethodLookup:    26ms    27ms   -2.4%    27ms    27ms   -2.2%
+			 CompareFloats:    33ms    34ms   -0.7%    33ms    34ms   -1.1%
+		 CompareFloatsIntegers:    66ms    67ms   -0.9%    66ms    67ms   -0.8%
+		       CompareIntegers:    51ms    50ms   +0.9%    51ms    50ms   +0.8%
+		CompareInternedStrings:    34ms    33ms   +0.4%    34ms    34ms   -0.4%
+			  CompareLongs:    29ms    29ms   -0.1%    29ms    29ms   -0.0%
+			CompareStrings:    43ms    44ms   -1.8%    44ms    44ms   -1.8%
+	    ComplexPythonFunctionCalls:    44ms    42ms   +3.9%    44ms    42ms   +4.1%
+			 ConcatStrings:    33ms    33ms   -0.4%    33ms    33ms   -1.0%
+		       CreateInstances:    47ms    48ms   -2.9%    47ms    49ms   -3.4%
+		    CreateNewInstances:    35ms    36ms   -2.5%    36ms    36ms   -2.5%
+	       CreateStringsWithConcat:    69ms    70ms   -0.7%    69ms    70ms   -0.9%
+			  DictCreation:    52ms    50ms   +3.1%    52ms    50ms   +3.0%
+		     DictWithFloatKeys:    40ms    44ms  -10.1%    43ms    45ms   -5.8%
+		   DictWithIntegerKeys:    32ms    36ms  -11.2%    35ms    37ms   -4.6%
+		    DictWithStringKeys:    29ms    34ms  -15.7%    35ms    40ms  -11.0%
+			      ForLoops:    30ms    29ms   +2.2%    30ms    29ms   +2.2%
+			    IfThenElse:    38ms    41ms   -6.7%    38ms    41ms   -6.9%
+			   ListSlicing:    36ms    36ms   -0.7%    36ms    37ms   -1.3%
+			NestedForLoops:    43ms    45ms   -3.1%    43ms    45ms   -3.2%
+	      NestedListComprehensions:    39ms    40ms   -1.7%    39ms    40ms   -2.1%
+		  NormalClassAttribute:    86ms    82ms   +5.1%    86ms    82ms   +5.0%
+	       NormalInstanceAttribute:    42ms    42ms   +0.3%    42ms    42ms   +0.0%
+		   PythonFunctionCalls:    39ms    38ms   +3.5%    39ms    38ms   +2.8%
+		     PythonMethodCalls:    51ms    49ms   +3.0%    51ms    50ms   +2.8%
+			     Recursion:    67ms    68ms   -1.4%    67ms    68ms   -1.4%
+			  SecondImport:    41ms    36ms  +12.5%    41ms    36ms  +12.6%
+		   SecondPackageImport:    45ms    40ms  +13.1%    45ms    40ms  +13.2%
+		 SecondSubmoduleImport:    92ms    95ms   -2.4%    95ms    98ms   -3.6%
+	       SimpleComplexArithmetic:    28ms    28ms   -0.1%    28ms    28ms   -0.2%
+		SimpleDictManipulation:    57ms    57ms   -1.0%    57ms    58ms   -1.0%
+		 SimpleFloatArithmetic:    29ms    28ms   +4.7%    29ms    28ms   +4.9%
+	      SimpleIntFloatArithmetic:    37ms    41ms   -8.5%    37ms    41ms   -8.7%
+	       SimpleIntegerArithmetic:    37ms    41ms   -9.4%    37ms    42ms  -10.2%
+	      SimpleListComprehensions:    33ms    33ms   -1.9%    33ms    34ms   -2.9%
+		SimpleListManipulation:    28ms    30ms   -4.3%    29ms    30ms   -4.1%
+		  SimpleLongArithmetic:    26ms    26ms   +0.5%    26ms    26ms   +0.5%
+			    SmallLists:    40ms    40ms   +0.1%    40ms    40ms   +0.1%
+			   SmallTuples:    46ms    47ms   -2.4%    46ms    48ms   -3.0%
+		 SpecialClassAttribute:   126ms   120ms   +4.7%   126ms   121ms   +4.4%
+	      SpecialInstanceAttribute:    42ms    42ms   +0.6%    42ms    42ms   +0.8%
+			StringMappings:    94ms    91ms   +3.9%    94ms    91ms   +3.8%
+		      StringPredicates:    48ms    49ms   -1.7%    48ms    49ms   -2.1%
+			 StringSlicing:    45ms    45ms   +1.4%    46ms    45ms   +1.5%
+			     TryExcept:    23ms    22ms   +4.9%    23ms    22ms   +4.8%
+			    TryFinally:    32ms    32ms   -0.1%    32ms    32ms   +0.1%
+			TryRaiseExcept:    17ms    17ms   +0.9%    17ms    17ms   +0.5%
+			  TupleSlicing:    49ms    48ms   +1.1%    49ms    49ms   +1.0%
+			   WithFinally:    48ms    47ms   +2.3%    48ms    47ms   +2.4%
+		       WithRaiseExcept:    45ms    44ms   +0.8%    45ms    45ms   +0.5%
+	-------------------------------------------------------------------------------
+	Totals:                          2284ms  2287ms   -0.1%  2306ms  2308ms   -0.1%
+
+	(this=pep447.pybench, other=default.pybench)
+
+
+A run of the benchmark suite (with option "-b 2n3") also seems to indicate that
+the performance impact is minimal::
+
+	Report on Linux fangorn.local 2.6.32-358.114.1.openstack.el6.x86_64 #1 SMP Wed Jul 3 02:11:25 EDT 2013 x86_64 x86_64
+	Total CPU cores: 8
+
+	### call_method_slots ###
+	Min: 0.304120 -> 0.282791: 1.08x faster
+	Avg: 0.304394 -> 0.282906: 1.08x faster
+	Significant (t=2329.92)
+	Stddev: 0.00016 -> 0.00004: 4.1814x smaller
+
+	### call_simple ###
+	Min: 0.249268 -> 0.221175: 1.13x faster
+	Avg: 0.249789 -> 0.221387: 1.13x faster
+	Significant (t=2770.11)
+	Stddev: 0.00012 -> 0.00013: 1.1101x larger
+
+	### django_v2 ###
+	Min: 0.632590 -> 0.601519: 1.05x faster
+	Avg: 0.635085 -> 0.602653: 1.05x faster
+	Significant (t=321.32)
+	Stddev: 0.00087 -> 0.00051: 1.6933x smaller
+
+	### fannkuch ###
+	Min: 1.033181 -> 0.999779: 1.03x faster
+	Avg: 1.036457 -> 1.001840: 1.03x faster
+	Significant (t=260.31)
+	Stddev: 0.00113 -> 0.00070: 1.6112x smaller
+
+	### go ###
+	Min: 0.526714 -> 0.544428: 1.03x slower
+	Avg: 0.529649 -> 0.547626: 1.03x slower
+	Significant (t=-93.32)
+	Stddev: 0.00136 -> 0.00136: 1.0028x smaller
+
+	### iterative_count ###
+	Min: 0.109748 -> 0.116513: 1.06x slower
+	Avg: 0.109816 -> 0.117202: 1.07x slower
+	Significant (t=-357.08)
+	Stddev: 0.00008 -> 0.00019: 2.3664x larger
+
+	### json_dump_v2 ###
+	Min: 2.554462 -> 2.609141: 1.02x slower
+	Avg: 2.564472 -> 2.620013: 1.02x slower
+	Significant (t=-76.93)
+	Stddev: 0.00538 -> 0.00481: 1.1194x smaller
+
+	### meteor_contest ###
+	Min: 0.196336 -> 0.191925: 1.02x faster
+	Avg: 0.196878 -> 0.192698: 1.02x faster
+	Significant (t=61.86)
+	Stddev: 0.00053 -> 0.00041: 1.2925x smaller
+
+	### nbody ###
+	Min: 0.228039 -> 0.235551: 1.03x slower
+	Avg: 0.228857 -> 0.236052: 1.03x slower
+	Significant (t=-54.15)
+	Stddev: 0.00130 -> 0.00029: 4.4810x smaller
+
+	### pathlib ###
+	Min: 0.108501 -> 0.105339: 1.03x faster
+	Avg: 0.109084 -> 0.105619: 1.03x faster
+	Significant (t=311.08)
+	Stddev: 0.00022 -> 0.00011: 1.9314x smaller
+
+	### regex_effbot ###
+	Min: 0.057905 -> 0.056447: 1.03x faster
+	Avg: 0.058055 -> 0.056760: 1.02x faster
+	Significant (t=79.22)
+	Stddev: 0.00006 -> 0.00015: 2.7741x larger
+
+	### silent_logging ###
+	Min: 0.070810 -> 0.072436: 1.02x slower
+	Avg: 0.070899 -> 0.072609: 1.02x slower
+	Significant (t=-191.59)
+	Stddev: 0.00004 -> 0.00008: 2.2640x larger
+
+	### spectral_norm ###
+	Min: 0.290255 -> 0.299286: 1.03x slower
+	Avg: 0.290335 -> 0.299541: 1.03x slower
+	Significant (t=-572.10)
+	Stddev: 0.00005 -> 0.00015: 2.8547x larger
+
+	### threaded_count ###
+	Min: 0.107215 -> 0.115206: 1.07x slower
+	Avg: 0.107488 -> 0.115996: 1.08x slower
+	Significant (t=-109.39)
+	Stddev: 0.00016 -> 0.00076: 4.8665x larger
+
+	The following not significant results are hidden, use -v to show them:
+	call_method, call_method_unknown, chaos, fastpickle, fastunpickle, float, formatted_logging, hexiom2, json_load, normal_startup, nqueens, pidigits, raytrace, regex_compile, regex_v8, richards, simple_logging, startup_nosite, telco, unpack_sequence.
+
+
+Alternative proposals
+---------------------
+
+``__getattribute_super__``
+..........................
+
+An earlier version of this PEP used the following static method on classes::
+
+    def __getattribute_super__(cls, name, object, owner): pass
+
+This method performed name lookup as well as invoking descriptors and was necessarily
+limited to working only with ``super.__getattribute__``.
+
+
+Reuse ``tp_getattro``
+.....................
+
+It would be nice to avoid adding a new slot, thus keeping the API simpler and
+easier to understand.  A comment on `Issue 18181`_ asked about reusing the
+``tp_getattro`` slot; that is, ``super`` could call the ``tp_getattro`` slot
+of all classes along the MRO.
+
+That won't work because ``tp_getattro`` will look in the instance
+``__dict__`` before it tries to resolve attributes using classes in the MRO.
+This means that using ``tp_getattro`` instead of peeking in the class
+dictionaries would change the semantics of the `super class`_.
+
+
+References
+==========
+
+* `Issue 18181`_ contains a prototype implementation
+
+Copyright
+=========
+
+This document has been placed in the public domain.
+
+.. _`Issue 18181`: http://bugs.python.org/issue18181
+
+.. _`super class`: http://docs.python.org/3/library/functions.html#super
+
+.. _`NotImplemented`: http://docs.python.org/3/library/constants.html#NotImplemented
+
+.. _`PyObject_GenericGetAttr`: http://docs.python.org/3/c-api/object.html#PyObject_GenericGetAttr
+
+.. _`type`: http://docs.python.org/3/library/functions.html#type
+
+.. _`AttributeError`: http://docs.python.org/3/library/exceptions.html#AttributeError
+
+.. _`PyObjC`: http://pyobjc.sourceforge.net/
+
+.. _`classmethod`: http://docs.python.org/3/library/functions.html#classmethod
diff --git a/pep-0448.txt b/pep-0448.txt
new file mode 100644
--- /dev/null
+++ b/pep-0448.txt
@@ -0,0 +1,247 @@
+PEP: 448
+Title: Additional Unpacking Generalizations
+Version: $Revision$
+Last-Modified: $Date$
+Author: Joshua Landau <joshua at landau.ws>
+Discussions-To: python-ideas at python.org
+Status: Draft
+Type: Standards Track
+Content-Type: text/x-rst
+Created: 29-Jun-2013
+Python-Version: 3.4
+Post-History:
+
+
+Abstract
+========
+
+This PEP proposes extended usages of the ``*`` iterable unpacking
+operator to allow unpacking in more positions, an arbitrary number of
+times, and in several additional circumstances.
+
+Specifically:
+
+Arbitrarily positioned unpacking operators::
+
+    >>> print(*[1], *[2], 3)
+    1 2 3
+    >>> dict(**{'x': 1}, y=2, **{'z': 3})
+    {'x': 1, 'y': 2, 'z': 3}
+
+Function calls currently have the restriction that keyword arguments
+must follow positional arguments and ``**`` unpackings must additionally
+follow ``*`` unpackings.  Because of the new levity for ``*`` and ``**``
+unpackings, it may be advisable to lift some or all of these
+restrictions.
+
+As currently, if an argument is given multiple times - such as a
+positional argument given both positionally and by keyword - a
+TypeError is raised.
+
+Unpacking is proposed to be allowed inside tuples, lists, sets,
+dictionaries and comprehensions::
+
+    >>> *range(4), 4
+    (0, 1, 2, 3, 4)
+    >>> [*range(4), 4]
+    [0, 1, 2, 3, 4]
+    >>> {*range(4), 4}
+    {0, 1, 2, 3, 4}
+    >>> {'x': 1, **{'y': 2}}
+    {'x': 1, 'y': 2}
+
+    >>> ranges = [range(i) for i in range(5)]
+    >>> [*item for item in ranges]
+    [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
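
As a point of comparison (purely illustrative, using only existing syntax),
the containers above can already be built today, albeit less uniformly:

```python
# Current-Python equivalents of the proposed literals.
assert tuple(range(4)) + (4,) == (0, 1, 2, 3, 4)   # (*range(4), 4)
assert list(range(4)) + [4] == [0, 1, 2, 3, 4]     # [*range(4), 4]
assert set(range(4)) | {4} == {0, 1, 2, 3, 4}      # {*range(4), 4}

d = {'x': 1}
d.update({'y': 2})                                 # {'x': 1, **{'y': 2}}
assert d == {'x': 1, 'y': 2}
```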
+
+
+Rationale
+=========
+
+Current usage of the ``*`` iterable unpacking operator features
+unnecessary restrictions that can harm readability.
+
+Unpacking multiple times has an obvious rationale.  When you want to
+unpack several iterables into a function call or follow an unpacking
+with more positional arguments, the most natural way would be to write::
+
+    function(**kw_arguments, **more_arguments)
+
+    function(*arguments, argument)
+
+Simple examples where this is useful are ``print`` and ``str.format``.
+Instead, you could be forced to write::
+
+    kwargs = dict(kw_arguments)
+    kwargs.update(more_arguments)
+    function(**kwargs)
+
+    args = list(arguments)
+    args.append(argument)
+    function(*args)
+
+or, if you know to do so::
+
+    from collections import ChainMap
+    function(**ChainMap(more_arguments, arguments))
+
+    from itertools import chain
+    function(*chain(args, [arg]))
+
+all of which add unnecessary line noise and, with the first approach,
+cause duplication of work.
+
+
+There are two primary rationales for unpacking inside of containers.
+Firstly there is a symmetry of assignment, where ``fst, *other, lst =
+elems`` and ``elems = fst, *other, lst`` are approximate inverses,
+ignoring the specifics of types.  This, in effect, simplifies the
+language by removing special cases.
+
+Secondly, it vastly simplifies types of "addition" such as combining
+dictionaries, and does so in an unambiguous and well-defined way::
+
+    combination = {**first_dictionary, "x": 1, "y": 2}
+
+instead of::
+
+    combination = first_dictionary.copy()
+    combination.update({"x": 1, "y": 2})
+
+which is especially important in contexts where expressions are
+preferred.  This is also useful as a more readable way of summing
+iterables into a list, such as ``my_list + list(my_tuple) +
+list(my_range)`` which is now equivalent to just ``[*my_list,
+*my_tuple, *my_range]``.
+
+
+The addition of unpacking to comprehensions is a logical extension.
+Its usage will primarily be a neat replacement for ``[i for j in
+2D_list for i in j]``, as the more readable ``[*l for l in 2D_list]``.
+Other uses are possible, but expected to occur rarely.
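
For reference, a small illustrative sketch of what the proposed comprehension
form would compute, spelled in ways that already work (``matrix`` here is a
made-up example standing in for ``2D_list``):

```python
from itertools import chain

matrix = [[1], [3, 4, 5], [2]]

# What the proposed [*l for l in matrix] would produce, written two
# ways that are valid today:
nested = [i for j in matrix for i in j]
chained = list(chain.from_iterable(matrix))

assert nested == chained == [1, 3, 4, 5, 2]
```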
+
+
+Specification
+=============
+
+Function calls may accept an unbound number of ``*`` and ``**``
+unpackings.  There will be no restriction of the order of positional
+arguments with relation to ``*`` unpackings nor any restriction of the
+order of keyword arguments with relation to ``**`` unpackings.
+
+Function calls currently have the restriction that keyword arguments
+must follow positional arguments and ``**`` unpackings must additionally
+follow ``*`` unpackings.  Because of the new levity for ``*`` and ``**``
+unpackings, it may be advisable to lift some or all of these
+restrictions.
+
+As currently, if an argument is given multiple times - such as a
+positional argument given both positionally and by keyword - a
+TypeError is raised.
+
+If the restrictions are kept, a function call will look like this::
+
+	function(
+	    argument or *args, argument or *args, ...,
+	    kwargument or *args, kwargument or *args, ...,
+	    kwargument or **kwargs, kwargument or **kwargs, ...
+	)
+
+If they are removed completely, a function call will look like this::
+    
+    function(
+        argument or keyword_argument or *args or **kwargs,
+        argument or keyword_argument or *args or **kwargs,
+        ...
+    )
+
+
+Tuples, lists, sets and dictionaries will allow unpacking.  This will
+act as if the elements from unpacked items were inserted in order at
+the site of unpacking, much as happens in unpacking in a function-call.
+Dictionaries require ``**`` unpacking; all the others require ``*`` unpacking.
+A dictionary's keys remain in a right-to-left priority order, so
+``{**{'a': 1}, 'a': 2, **{'a': 3}}`` evaluates to ``{'a': 3}``.  There
+is no restriction on the number or position of unpackings.
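
This priority matches what ``dict.update`` already does when applied left to
right; the sketch below is purely illustrative (``merge`` is a hypothetical
helper, not proposed syntax):

```python
# 'merge' mirrors the proposed semantics: entries are applied left to
# right, so later duplicates win.
def merge(*parts):
    result = {}
    for part in parts:
        result.update(part)
    return result

# Equivalent of the proposed {**{'a': 1}, 'a': 2, **{'a': 3}}:
assert merge({'a': 1}, {'a': 2}, {'a': 3}) == {'a': 3}
assert merge({'x': 1}, {'y': 2}) == {'x': 1, 'y': 2}
```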
+
+Comprehensions, by simple extension, will support unpacking.  As before,
+dictionaries require ``**`` unpacking, all the others require ``*``
+unpacking and key priorities are unchanged.
+
+Examples include::
+
+    {*[1, 2, 3], 4, 5, *{6, 7, 8}}
+
+    (*e for e in [[1], [3, 4, 5], [2]])
+
+    {**dictionary for dictionary in (globals(), locals())}
+
+    {**locals(), "override": None}
+
+
+Disadvantages
+=============
+
+If the current restrictions for function call arguments (keyword
+arguments must follow positional arguments and ``**`` unpackings must
+additionally follow ``*`` unpackings) are kept, the allowable orders
+for arguments in a function call are more complicated than before.
+The simplest explanation for the rules may be "positional arguments
+come first and keyword arguments follow, but ``*`` unpackings are
+allowed after keyword arguments".
+
+If the current restrictions are lifted, there are no obvious gains to
+code, as the only new orders that are allowed look silly: ``f(a, e=e,
+d=d, b, c)`` being one simple example.
+
+
+Whilst ``*elements, = iterable`` causes ``elements`` to be a list,
+``elements = *iterable,`` causes ``elements`` to be a tuple.  The
+reason for this may not be obvious at first glance and may confuse
+people unfamiliar with the construct.
+
+
+Implementation
+==============
+
+An implementation for an old version of Python 3 is found at Issue
+2292 on bug tracker [1]_, although several changes should be made:
+
+- It has yet to be updated to the most recent Python version
+
+- It features a now redundant replacement for "yield from" which
+  should be removed
+
+- It also loses support for calling function with keyword arguments before
+  positional arguments, which is an unnecessary backwards-incompatible change
+
+- If the restrictions on the order of arguments in a function call are
+  partially or fully lifted, they would need to be included
+
+
+References
+==========
+
+.. [1] Issue 2292, "Missing `*`-unpacking generalizations", Thomas Wouters
+   (http://bugs.python.org/issue2292)
+
+.. [2] Discussion on Python-ideas list,
+   "list / array comprehensions extension", Alexander Heger
+   (http://mail.python.org/pipermail/python-ideas/2011-December/013097.html)
+
+
+Copyright
+=========
+
+This document has been placed in the public domain.
+
+
+
+..
+   Local Variables:
+   mode: indented-text
+   indent-tabs-mode: nil
+   sentence-end-double-space: t
+   fill-column: 70
+   coding: utf-8
+   End:
diff --git a/pep-0466/test_cloexec.py b/pep-0466/test_cloexec.py
new file mode 100644
--- /dev/null
+++ b/pep-0466/test_cloexec.py
@@ -0,0 +1,50 @@
+import os, fcntl, sys, errno
+
+def get_cloexec(fd):
+    try:
+        flags = fcntl.fcntl(fd, fcntl.F_GETFD)
+        return bool(flags & fcntl.FD_CLOEXEC)
+    except IOError as err:
+        if err.errno == errno.EBADF:
+            return '<invalid file descriptor>'
+        else:
+            return str(err)
+
+def set_cloexec(fd):
+    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
+    flags |= fcntl.FD_CLOEXEC
+    fcntl.fcntl(fd, fcntl.F_SETFD, flags)
+
+def main():
+    f = open(__file__, "rb")
+    fd = f.fileno()
+    print("initial state: fd=%s, cloexec=%s" % (fd, get_cloexec(fd)))
+
+
+    pid = os.fork()
+    if not pid:
+        set_cloexec(fd)
+        print("child process after fork, set cloexec: cloexec=%s" % get_cloexec(fd))
+        child_argv = [sys.executable, __file__, str(fd),
+                      'child process after exec']
+        os.execv(child_argv[0], child_argv)
+
+    os.waitpid(pid, 0)
+    print("parent process after fork: cloexec=%s" % get_cloexec(fd))
+    child_argv = [sys.executable, __file__, str(fd),
+                  'parent process after exec']
+    os.execv(child_argv[0], child_argv)
+
+def after_exec():
+    fd = int(sys.argv[1])
+    name = sys.argv[2]
+    print("%s: fd=%s, cloexec=%s"
+          % (name, fd, get_cloexec(fd)))
+    sys.exit()
+
+if __name__ == "__main__":
+    if len(sys.argv) == 1:
+        main()
+    else:
+        after_exec()
+
diff --git a/pep-3150.txt b/pep-3150.txt
--- a/pep-3150.txt
+++ b/pep-3150.txt
@@ -19,9 +19,11 @@
 Python statements that do not currently have an associated code suite. This
 clause will create a statement local namespace for additional names that are
 accessible in the associated statement, but do not become part of the
-containing namespace. To permit a sane implementation strategy, forward
-references to names from the ``given`` clause will need to be marked
-explicitly.
+containing namespace.
+
+Adoption of a new symbol, ``?``, is proposed to denote a forward reference
+to the namespace created by running the associated code suite. It will be
+a reference to a ``types.SimpleNamespace`` object.
 
 The primary motivation is to enable a more declarative style of programming,
 where the operation to be performed is presented to the reader first, and the
@@ -72,12 +74,16 @@
 name in the header line, with the actual definitions following in
 the indented clause. As a simple example::
 
-   sorted_data = sorted(data, key=.sort_key) given:
+   sorted_data = sorted(data, key=?.sort_key) given:
        def sort_key(item):
            return item.attr1, item.attr2
 
-The leading ``.`` on ``.sort_key`` indicates to the compiler that this
-is a forward reference to a name defined in the ``given`` clause.
+The new symbol ``?`` is used to refer to the given namespace. It would be a
+``types.SimpleNamespace`` instance, so ``?.sort_key`` functions as
+a forward reference to a name defined in the ``given`` clause.
+
+A docstring would be permitted in the given clause, and would be attached
+to the result namespace as its ``__doc__`` attribute.
 
 The ``pass`` statement is included to provide a consistent way to skip
 inclusion of a meaningful expression in the header line. While this is not
@@ -94,7 +100,7 @@
    # Explicit early binding via given clause
    seq = []
    for i in range(10):
-       seq.append(.f) given i=i:
+       seq.append(?.f) given i=i in:
            def f():
                return i
    assert [f() for f in seq] == list(range(10))
@@ -105,7 +111,7 @@
 
 The following statement::
 
-   op(.f, .g) given bound_a=a, bound_b=b:
+   op(?.f, ?.g) given bound_a=a, bound_b=b in:
        def f():
            return bound_a + bound_b
        def g():
@@ -121,9 +127,10 @@
            return bound_a + bound_b
        def g():
            return bound_a - bound_b
-      return f, g
-   __ref1, __ref2 = __scope(__arg1)
-   op(__ref1, __ref2)
+      return types.SimpleNamespace(**locals())
+   __ref = __scope(__arg1, __arg2)
+   __ref.__doc__ = __scope.__doc__
+   op(__ref.f, __ref.g)
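
The expansion above can be approximated with code that runs today; in this
sketch ``_scope`` and ``ref`` are illustrative stand-ins for the
compiler-generated ``__scope`` and ``__ref`` names:

```python
import types

def _scope(bound_a, bound_b):        # stand-in for the hidden __scope
    def f():
        return bound_a + bound_b
    def g():
        return bound_a - bound_b
    # Capture the suite's locals as the result namespace.
    return types.SimpleNamespace(**locals())

ref = _scope(10, 4)                  # stand-in for the hidden __ref
assert ref.f() == 14
assert ref.g() == 6
```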
 
 A ``given`` clause is essentially a nested function which is created and
 then immediately executed. Unless explicitly passed in, names are looked
@@ -158,7 +165,7 @@
    yield_stmt: yield_expr [given_clause]
    raise_stmt: 'raise' [test ['from' test]] [given_clause]
    assert_stmt: 'assert' test [',' test] [given_clause]
-   given_clause: "given" (NAME '=' test)* ":" suite
+   given_clause: "given" [(NAME '=' test)+ "in"] ":" suite
 
 (Note that ``expr_stmt`` in the grammar is a slight misnomer, as it covers
 assignment and augmented assignment in addition to simple expression
@@ -207,7 +214,7 @@
    flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
 
 In addition to the above changes, the definition of ``atom`` would be changed
-to also allow ``"." NAME``. The restriction of this usage to statements with
+to also allow ``?``. The restriction of this usage to statements with
 an associated ``given`` clause would be handled by a later stage of the
 compilation process (likely AST construction, which already enforces
 other restrictions where the grammar is overly permissive in order to
@@ -277,13 +284,14 @@
 However, while they are the initial motivating use case, limiting this
 feature solely to simple assignments would be overly restrictive. Once the
 feature is defined at all, it would be quite arbitrary to prevent its use
-for augmented assignments, return statements, yield expressions and
-arbitrary expressions that may modify the application state.
+for augmented assignments, return statements, yield expressions,
+comprehensions and arbitrary expressions that may modify the
+application state.
 
 The ``given`` clause may also function as a more readable
 alternative to some uses of lambda expressions and similar
 constructs when passing one-off functions to operations
-like ``sorted()``.
+like ``sorted()`` or in callback based event-driven programming.
 
 In module and class level code, the ``given`` clause will serve as a
 clear and reliable replacement for usage of the ``del`` statement to keep
@@ -350,7 +358,7 @@
 
     # would be equivalent to
 
-    seq2 = .result given seq=seq:
+    seq2 = ?.result given seq=seq:
         result = []
         for y in seq:
             if p(y):
@@ -367,7 +375,7 @@
 provide a precisely equivalent expansion for a generator expression. The
 closest it can get is to define an additional level of scoping::
 
-    seq2 = .g(seq) given:
+    seq2 = ?.g(seq) given:
         def g(seq):
             for y in seq:
                 if p(y):
@@ -375,6 +383,22 @@
                         if q(x):
                             yield x
 
+This limitation could be remedied by permitting the given clause to be
+a generator function, in which case ? would refer to a generator-iterator
+object rather than a simple namespace::
+
+    seq2 = ? given seq=seq in:
+        for y in seq:
+            if p(y):
+                for x in y:
+                    if q(x):
+                        yield x
+
+However, this would make the meaning of "?" quite ambiguous, even more so
+than is already the case for the meaning of ``def`` statements (which will
+usually have a docstring indicating whether or not a function definition is
+actually a generator).
+
 Explaining Decorator Clause Evaluation and Application
 ------------------------------------------------------
 
@@ -477,14 +501,19 @@
 I believe the proposal in this PEP would finally let Python get close to the
 "executable pseudocode" bar for the kind of thought expressed above::
 
-   sorted_list = sorted(original, key=.sort_key) given:
-       def sort_key(item):
+   sorted_list = sorted(original, key=?.key) given:
+       def key(item):
            return item.attr1, item.attr2
 
-Everything is in the same order as it was in the user's original thought, the
-only addition they have to make is to give the sorting criteria a name so that
-the usage can be linked up to the subsequent definition.
-   
+Everything is in the same order as it was in the user's original thought, and
+they don't even need to come up with a name for the sorting criteria: it is
+possible to reuse the keyword argument name directly.
+
+A possible enhancement to this proposal would be to provide a convenient
+shorthand syntax to say "use the given clause contents as keyword
+arguments". Even without dedicated syntax, that can be written simply as
+``**vars(?)``.
+
 
 Harmful to Introspection
 ~~~~~~~~~~~~~~~~~~~~~~~~
@@ -516,7 +545,7 @@
 This is more of a deficiency in the PEP rather than the idea, though. If
 it wasn't a real world problem, we wouldn't get so many complaints about
 the lack of multi-line lambda support and Ruby's block construct
-probaly wouldn't be quite so popular.
+probably wouldn't be quite so popular.
 
 
 Open Questions
@@ -525,9 +554,12 @@
 Syntax for Forward References
 -----------------------------
 
-The leading ``.`` arguably fails the "syntax shall not look like grit on
-Uncle Tim's monitor" test. However, it does have the advantages of being
-easy to type and already having an association with namespaces.
+The ``?`` symbol is proposed for forward references to the given namespace
+as it is short, currently unused and suggests "there's something missing
+here that will be filled in later".
+
+The proposal in the PEP doesn't neatly parallel any existing Python feature,
+so reusing an already used symbol has been deliberately avoided.
 
 
 Handling of ``nonlocal`` and ``global``
@@ -541,8 +573,8 @@
 functions were defined as in the expansion above.
 
 
-Detailed Semantics #3: Handling of ``break`` and ``continue``
--------------------------------------------------------------
+Handling of ``break`` and ``continue``
+--------------------------------------
 
 ``break`` and ``continue`` will operate as if the anonymous functions were
 defined as in the expansion above. They will be syntax errors if they occur
@@ -561,6 +593,25 @@
 Examples
 ========
 
+Defining callbacks for event driven programming::
+
+  # Current Python (definition before use)
+  def cb(sock):
+      # Do something with socket
+  def eb(exc):
+      logging.exception(
+          "Failed connecting to %s:%s", host, port)
+  loop.create_connection((host, port), cb, eb)
+
+  # Becomes:
+  loop.create_connection((host, port), ?.cb, ?.eb) given:
+      def cb(sock):
+          # Do something with socket
+      def eb(exc):
+          logging.exception(
+              "Failed connecting to %s:%s", host, port)
+
+
 Defining "one-off" classes which typically only have a single instance::
 
   # Current Python (instantiation after definition)
@@ -579,7 +630,7 @@
     ... # However many lines
 
   # Becomes:
-  public_name = .MeaningfulClassName(*params) given:
+  public_name = ?.MeaningfulClassName(*params) given:
     class MeaningfulClassName():
       ... # Should trawl the stdlib for an example of doing this
 
@@ -593,7 +644,7 @@
   del _createenviron
 
   # Becomes:
-  environ = ._createenviron() given:
+  environ = ?._createenviron() given:
       def _createenviron():
         ... # 27 line function
 
@@ -606,7 +657,7 @@
   return decorating_function
 
   # Becomes:
-  return .decorating_function given:
+  return ?.decorating_function given:
     # Cell variables rather than locals, but should give similar speedup
     tuple, sorted, len, KeyError = tuple, sorted, len, KeyError
     def decorating_function(user_function):
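The ``tuple, sorted, len, KeyError = tuple, sorted, len, KeyError`` line works because rebinding builtins in an enclosing scope turns slow builtins lookups into fast cell-variable lookups inside the nested function. A minimal, runnable sketch of the same trick:

```python
# Rebinding a builtin in an enclosing scope makes the inner function
# resolve it via a cell variable instead of the slower builtins lookup.
def make_counter():
    len_ = len  # cell variable: cheaper to resolve than the builtin

    def count_chars(words):
        return sum(len_(w) for w in words)

    return count_chars

count_chars = make_counter()
print(count_chars(["spam", "eggs"]))
```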
@@ -701,6 +752,9 @@
 .. [9] Possible PEP 3150 style guidelines (#2):
    http://mail.python.org/pipermail/python-ideas/2011-October/012341.html
 
+.. [10] Multi-line lambdas (again!)
+   http://mail.python.org/pipermail/python-ideas/2013-August/022526.html
+
 Copyright
 =========
 
diff --git a/pep-3156.txt b/pep-3156.txt
--- a/pep-3156.txt
+++ b/pep-3156.txt
@@ -846,6 +846,12 @@
   convention from the section "Callback Style" below) is always called
   with a single argument, the Future object.
 
+- ``remove_done_callback(fn)``.  Remove the argument from the list of
+  callbacks.  This method is not defined by PEP 3148.  The argument
+  must be equal (using ``==``) to the argument passed to
+  ``add_done_callback()``.  Returns the number of times the callback
+  was removed.
+
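A sketch of the ``remove_done_callback()`` semantics described above, assuming the Future stores its callbacks in a plain list (this is an illustration of the contract, not tulip's actual implementation):

```python
# Remove every registered callback equal (==) to fn and report how
# many were removed, matching the described remove_done_callback()
# contract.  `callbacks` stands in for a Future's internal list.
def remove_done_callback(callbacks, fn):
    """Remove all callbacks equal to fn; return the removal count."""
    before = len(callbacks)
    callbacks[:] = [cb for cb in callbacks if cb != fn]
    return before - len(callbacks)

cbs = [print, print, len]
removed = remove_done_callback(cbs, print)
print(removed)
```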
 - ``set_result(result)``.  The Future must not be done (nor cancelled)
   already.  This makes the Future done and schedules the callbacks.
   Difference with PEP 3148: This is a public API.
@@ -1302,25 +1308,25 @@
 
   - ``FIRST_EXCEPTION``: Wait until at least one Future is done (not
     cancelled) with an exception set.  (The exclusion of cancelled
-    Futures from the filter is surprising, but PEP 3148 does it this
-    way.)
+    Futures from the condition is surprising, but PEP 3148 does it
+    this way.)
 
 - ``tulip.as_completed(fs, timeout=None)``.  Returns an iterator whose
-  values are Futures; waiting for successive values waits until the
-  next Future or coroutine from the set ``fs`` completes, and returns
-  its result (or raises its exception).  The optional argument
-  ``timeout`` has the same meaning and default as it does for
-  ``concurrent.futures.wait()``: when the timeout occurs, the next
-  Future returned by the iterator will raise ``TimeoutError`` when
-  waited for.  Example of use::
+  values are Futures or coroutines; waiting for successive values
+  waits until the next Future or coroutine from the set ``fs``
+  completes, and returns its result (or raises its exception).  The
+  optional argument ``timeout`` has the same meaning and default as it
+  does for ``concurrent.futures.wait()``: when the timeout occurs, the
+  next Future returned by the iterator will raise ``TimeoutError``
+  when waited for.  Example of use::
 
     for f in as_completed(fs):
         result = yield from f  # May raise an exception.
         # Use result.
 
-  Note: if you do not wait for the futures as they are produced by the
-  iterator, your ``for`` loop may not make progress (since you are not
-  allowing other tasks to run).
+  Note: if you do not wait for the values produced by the iterator,
+  your ``for`` loop may not make progress (since you are not allowing
+  other tasks to run).
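The consume-as-they-finish pattern above mirrors the synchronous ``concurrent.futures.as_completed()`` API, with ``yield from f`` playing the role of ``f.result()``. A runnable sketch using the stdlib thread-based version:

```python
# Same pattern with the synchronous concurrent.futures API: iterate
# over futures in completion order and collect each result.
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(n):
    return n * n

with ThreadPoolExecutor(max_workers=3) as pool:
    fs = [pool.submit(square, n) for n in (1, 2, 3)]
    # Completion order is nondeterministic, so sort for a stable result.
    results = sorted(f.result() for f in as_completed(fs))
print(results)
```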
 
 Sleeping
 --------
diff --git a/pep2html.py b/pep2html.py
--- a/pep2html.py
+++ b/pep2html.py
@@ -202,7 +202,7 @@
     print >> outfile, '</td></tr></table>'
     print >> outfile, '<div class="header">\n<table border="0">'
     for k, v in header:
-        if k.lower() in ('author', 'discussions-to'):
+        if k.lower() in ('author', 'bdfl-delegate', 'discussions-to'):
             mailtos = []
             for part in re.split(',\s*', v):
                 if '@' in part:

-- 
Repository URL: http://hg.python.org/peps
