[Python-checkins] r64727 - in python/branches/tlee-ast-optimize: Doc/glossary.rst Doc/library/abc.rst Doc/library/pickle.rst Doc/library/rlcompleter.rst Doc/library/shutil.rst Doc/library/stdtypes.rst Doc/library/zipfile.rst Lib/decimal.py Lib/pydoc.py Lib/rlcompleter.py Lib/shutil.py Lib/test/test_cookielib.py Lib/test/test_decimal.py Lib/test/test_multiprocessing.py Lib/test/test_pydoc.py Lib/test/test_shutil.py Lib/test/test_zipfile.py Lib/test/test_zipfile64.py Lib/zipfile.py Misc/NEWS Modules/nismodule.c Python/ceval.c Python/pythonrun.c
thomas.lee
python-checkins at python.org
Sat Jul 5 13:12:43 CEST 2008
Author: thomas.lee
Date: Sat Jul 5 13:12:42 2008
New Revision: 64727
Log:
Merged revisions 64654-64726 via svnmerge from
svn+ssh://pythondev@svn.python.org/python/trunk
........
r64655 | mark.dickinson | 2008-07-02 19:37:01 +1000 (Wed, 02 Jul 2008) | 7 lines
Replace occurrences of '\d' with '[0-9]' in Decimal regex, to make sure
that the behaviour of Decimal doesn't change if/when re.UNICODE becomes
assumed in Python 3.0.
Also add a check that alternative Unicode digits (e.g. u'\N{FULLWIDTH
DIGIT ONE}') are *not* accepted in a numeric string.
........
r64656 | nick.coghlan | 2008-07-02 23:09:19 +1000 (Wed, 02 Jul 2008) | 1 line
Issue 3190: pydoc now hides module __package__ attributes
........
r64663 | jesse.noller | 2008-07-03 02:44:09 +1000 (Thu, 03 Jul 2008) | 1 line
Reenable the manager tests with Amaury's threading fix
........
r64664 | facundo.batista | 2008-07-03 02:52:55 +1000 (Thu, 03 Jul 2008) | 4 lines
Issue #449227: In the rlcompleter module, a '(' is now appended to
callable objects when completed.
........
r64665 | jesse.noller | 2008-07-03 02:56:51 +1000 (Thu, 03 Jul 2008) | 1 line
Add #!/usr/bin/env python for ben
........
r64673 | brett.cannon | 2008-07-03 07:40:11 +1000 (Thu, 03 Jul 2008) | 4 lines
Fix some Latin-1 characters to be UTF-8 as the file encoding specifies.
Closes issue #3261. Thanks Leo Soto for the bug report.
........
r64677 | brett.cannon | 2008-07-03 07:52:42 +1000 (Thu, 03 Jul 2008) | 2 lines
Revert r64673 and instead just change the file encoding.
........
r64685 | amaury.forgeotdarc | 2008-07-03 09:40:28 +1000 (Thu, 03 Jul 2008) | 3 lines
Try a blind fix to nismodule which fails on the solaris10 3.0 buildbot:
the GIL must be re-acquired in the callback function
........
r64687 | andrew.kuchling | 2008-07-03 22:50:03 +1000 (Thu, 03 Jul 2008) | 1 line
Tweak wording
........
r64688 | martin.v.loewis | 2008-07-03 22:51:14 +1000 (Thu, 03 Jul 2008) | 9 lines
Patch #1622: Correct interpretation of various ZIP header fields.
Also fixes
- Issue #1526: Allow more than 64k files to be added to Zip64 file.
- Issue #1746: Correct handling of zipfile archive comments (previously
archives with comments over 4k were flagged as invalid). Allow writing
Zip files with archive comments by setting the 'comment' attribute of a ZipFile.
........
r64689 | benjamin.peterson | 2008-07-03 22:57:35 +1000 (Thu, 03 Jul 2008) | 1 line
lowercase glossary term
........
r64690 | benjamin.peterson | 2008-07-03 23:01:17 +1000 (Thu, 03 Jul 2008) | 1 line
let the term be linked
........
r64702 | georg.brandl | 2008-07-05 03:22:53 +1000 (Sat, 05 Jul 2008) | 2 lines
Give the pickle special methods a signature.
........
r64719 | raymond.hettinger | 2008-07-05 12:11:55 +1000 (Sat, 05 Jul 2008) | 1 line
Update comment on prediction macros.
........
r64721 | georg.brandl | 2008-07-05 20:07:18 +1000 (Sat, 05 Jul 2008) | 2 lines
Fix tabs.
........
r64722 | georg.brandl | 2008-07-05 20:13:36 +1000 (Sat, 05 Jul 2008) | 4 lines
#2663: support an *ignore* argument to shutil.copytree(). Patch by Tarek Ziade.
This is a new feature, but Barry authorized adding it in the beta period.
........
Modified:
python/branches/tlee-ast-optimize/ (props changed)
python/branches/tlee-ast-optimize/Doc/glossary.rst
python/branches/tlee-ast-optimize/Doc/library/abc.rst
python/branches/tlee-ast-optimize/Doc/library/pickle.rst
python/branches/tlee-ast-optimize/Doc/library/rlcompleter.rst
python/branches/tlee-ast-optimize/Doc/library/shutil.rst
python/branches/tlee-ast-optimize/Doc/library/stdtypes.rst
python/branches/tlee-ast-optimize/Doc/library/zipfile.rst
python/branches/tlee-ast-optimize/Lib/decimal.py
python/branches/tlee-ast-optimize/Lib/pydoc.py
python/branches/tlee-ast-optimize/Lib/rlcompleter.py
python/branches/tlee-ast-optimize/Lib/shutil.py
python/branches/tlee-ast-optimize/Lib/test/test_cookielib.py
python/branches/tlee-ast-optimize/Lib/test/test_decimal.py
python/branches/tlee-ast-optimize/Lib/test/test_multiprocessing.py
python/branches/tlee-ast-optimize/Lib/test/test_pydoc.py
python/branches/tlee-ast-optimize/Lib/test/test_shutil.py
python/branches/tlee-ast-optimize/Lib/test/test_zipfile.py
python/branches/tlee-ast-optimize/Lib/test/test_zipfile64.py
python/branches/tlee-ast-optimize/Lib/zipfile.py
python/branches/tlee-ast-optimize/Misc/NEWS
python/branches/tlee-ast-optimize/Modules/nismodule.c
python/branches/tlee-ast-optimize/Python/ceval.c
python/branches/tlee-ast-optimize/Python/pythonrun.c
Modified: python/branches/tlee-ast-optimize/Doc/glossary.rst
==============================================================================
--- python/branches/tlee-ast-optimize/Doc/glossary.rst (original)
+++ python/branches/tlee-ast-optimize/Doc/glossary.rst Sat Jul 5 13:12:42 2008
@@ -24,7 +24,7 @@
2to3 is available in the standard library as :mod:`lib2to3`; a standalone
entry point is provided as :file:`Tools/scripts/2to3`.
- Abstract Base Class
+ abstract base class
Abstract Base Classes (abbreviated ABCs) complement :term:`duck-typing` by
providing a way to define interfaces when other techniques like :func:`hasattr`
would be clumsy. Python comes with many builtin ABCs for data structures
Modified: python/branches/tlee-ast-optimize/Doc/library/abc.rst
==============================================================================
--- python/branches/tlee-ast-optimize/Doc/library/abc.rst (original)
+++ python/branches/tlee-ast-optimize/Doc/library/abc.rst Sat Jul 5 13:12:42 2008
@@ -9,8 +9,8 @@
.. versionadded:: 2.6
-This module provides the infrastructure for defining :term:`abstract base
-classes` (ABCs) in Python, as outlined in :pep:`3119`; see the PEP for why this
+This module provides the infrastructure for defining an :term:`abstract base
+class` (ABCs) in Python, as outlined in :pep:`3119`; see the PEP for why this
was added to Python. (See also :pep:`3141` and the :mod:`numbers` module
regarding a type hierarchy for numbers based on ABCs.)
Modified: python/branches/tlee-ast-optimize/Doc/library/pickle.rst
==============================================================================
--- python/branches/tlee-ast-optimize/Doc/library/pickle.rst (original)
+++ python/branches/tlee-ast-optimize/Doc/library/pickle.rst Sat Jul 5 13:12:42 2008
@@ -396,6 +396,8 @@
The pickle protocol
-------------------
+.. currentmodule:: None
+
This section describes the "pickling protocol" that defines the interface
between the pickler/unpickler and the objects that are being serialized. This
protocol provides a standard way for you to define, customize, and control how
@@ -410,129 +412,126 @@
Pickling and unpickling normal class instances
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-.. index::
- single: __getinitargs__() (copy protocol)
- single: __init__() (instance constructor)
-
-When a pickled class instance is unpickled, its :meth:`__init__` method is
-normally *not* invoked. If it is desirable that the :meth:`__init__` method be
-called on unpickling, an old-style class can define a method
-:meth:`__getinitargs__`, which should return a *tuple* containing the arguments
-to be passed to the class constructor (:meth:`__init__` for example). The
-:meth:`__getinitargs__` method is called at pickle time; the tuple it returns is
-incorporated in the pickle for the instance.
-
-.. index:: single: __getnewargs__() (copy protocol)
-
-New-style types can provide a :meth:`__getnewargs__` method that is used for
-protocol 2. Implementing this method is needed if the type establishes some
-internal invariants when the instance is created, or if the memory allocation is
-affected by the values passed to the :meth:`__new__` method for the type (as it
-is for tuples and strings). Instances of a :term:`new-style class` :class:`C`
-are created using ::
-
- obj = C.__new__(C, *args)
-
-
-where *args* is the result of calling :meth:`__getnewargs__` on the original
-object; if there is no :meth:`__getnewargs__`, an empty tuple is assumed.
-
-.. index::
- single: __getstate__() (copy protocol)
- single: __setstate__() (copy protocol)
- single: __dict__ (instance attribute)
-
-Classes can further influence how their instances are pickled; if the class
-defines the method :meth:`__getstate__`, it is called and the return state is
-pickled as the contents for the instance, instead of the contents of the
-instance's dictionary. If there is no :meth:`__getstate__` method, the
-instance's :attr:`__dict__` is pickled.
-
-Upon unpickling, if the class also defines the method :meth:`__setstate__`, it
-is called with the unpickled state. [#]_ If there is no :meth:`__setstate__`
-method, the pickled state must be a dictionary and its items are assigned to the
-new instance's dictionary. If a class defines both :meth:`__getstate__` and
-:meth:`__setstate__`, the state object needn't be a dictionary and these methods
-can do what they want. [#]_
-
-.. warning::
-
- For :term:`new-style class`\es, if :meth:`__getstate__` returns a false
- value, the :meth:`__setstate__` method will not be called.
+.. method:: object.__getinitargs__()
+
+ When a pickled class instance is unpickled, its :meth:`__init__` method is
+ normally *not* invoked. If it is desirable that the :meth:`__init__` method
+ be called on unpickling, an old-style class can define a method
+ :meth:`__getinitargs__`, which should return a *tuple* containing the
+ arguments to be passed to the class constructor (:meth:`__init__` for
+ example). The :meth:`__getinitargs__` method is called at pickle time; the
+ tuple it returns is incorporated in the pickle for the instance.
+
+.. method:: object.__getnewargs__()
+
+ New-style types can provide a :meth:`__getnewargs__` method that is used for
+ protocol 2. Implementing this method is needed if the type establishes some
+ internal invariants when the instance is created, or if the memory allocation
+ is affected by the values passed to the :meth:`__new__` method for the type
+ (as it is for tuples and strings). Instances of a :term:`new-style class`
+ ``C`` are created using ::
+
+ obj = C.__new__(C, *args)
+
+ where *args* is the result of calling :meth:`__getnewargs__` on the original
+ object; if there is no :meth:`__getnewargs__`, an empty tuple is assumed.
+
+.. method:: object.__getstate__()
+
+ Classes can further influence how their instances are pickled; if the class
+ defines the method :meth:`__getstate__`, it is called and the return state is
+ pickled as the contents for the instance, instead of the contents of the
+ instance's dictionary. If there is no :meth:`__getstate__` method, the
+ instance's :attr:`__dict__` is pickled.
+
+.. method:: object.__setstate__()
+
+ Upon unpickling, if the class also defines the method :meth:`__setstate__`,
+ it is called with the unpickled state. [#]_ If there is no
+ :meth:`__setstate__` method, the pickled state must be a dictionary and its
+ items are assigned to the new instance's dictionary. If a class defines both
+ :meth:`__getstate__` and :meth:`__setstate__`, the state object needn't be a
+ dictionary and these methods can do what they want. [#]_
+
+ .. warning::
+
+ For :term:`new-style class`\es, if :meth:`__getstate__` returns a false
+ value, the :meth:`__setstate__` method will not be called.
Pickling and unpickling extension types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-.. index::
- single: __reduce__() (pickle protocol)
- single: __reduce_ex__() (pickle protocol)
- single: __safe_for_unpickling__ (pickle protocol)
-
-When the :class:`Pickler` encounters an object of a type it knows nothing about
---- such as an extension type --- it looks in two places for a hint of how to
-pickle it. One alternative is for the object to implement a :meth:`__reduce__`
-method. If provided, at pickling time :meth:`__reduce__` will be called with no
-arguments, and it must return either a string or a tuple.
-
-If a string is returned, it names a global variable whose contents are pickled
-as normal. The string returned by :meth:`__reduce__` should be the object's
-local name relative to its module; the pickle module searches the module
-namespace to determine the object's module.
-
-When a tuple is returned, it must be between two and five elements long.
-Optional elements can either be omitted, or ``None`` can be provided as their
-value. The contents of this tuple are pickled as normal and used to
-reconstruct the object at unpickling time. The semantics of each element are:
-
-* A callable object that will be called to create the initial version of the
- object. The next element of the tuple will provide arguments for this callable,
- and later elements provide additional state information that will subsequently
- be used to fully reconstruct the pickled data.
-
- In the unpickling environment this object must be either a class, a callable
- registered as a "safe constructor" (see below), or it must have an attribute
- :attr:`__safe_for_unpickling__` with a true value. Otherwise, an
- :exc:`UnpicklingError` will be raised in the unpickling environment. Note that
- as usual, the callable itself is pickled by name.
-
-* A tuple of arguments for the callable object.
-
- .. versionchanged:: 2.5
- Formerly, this argument could also be ``None``.
-
-* Optionally, the object's state, which will be passed to the object's
- :meth:`__setstate__` method as described in section :ref:`pickle-inst`. If the
- object has no :meth:`__setstate__` method, then, as above, the value must be a
- dictionary and it will be added to the object's :attr:`__dict__`.
-
-* Optionally, an iterator (and not a sequence) yielding successive list items.
- These list items will be pickled, and appended to the object using either
- ``obj.append(item)`` or ``obj.extend(list_of_items)``. This is primarily used
- for list subclasses, but may be used by other classes as long as they have
- :meth:`append` and :meth:`extend` methods with the appropriate signature.
- (Whether :meth:`append` or :meth:`extend` is used depends on which pickle
- protocol version is used as well as the number of items to append, so both must
- be supported.)
-
-* Optionally, an iterator (not a sequence) yielding successive dictionary items,
- which should be tuples of the form ``(key, value)``. These items will be
- pickled and stored to the object using ``obj[key] = value``. This is primarily
- used for dictionary subclasses, but may be used by other classes as long as they
- implement :meth:`__setitem__`.
-
-It is sometimes useful to know the protocol version when implementing
-:meth:`__reduce__`. This can be done by implementing a method named
-:meth:`__reduce_ex__` instead of :meth:`__reduce__`. :meth:`__reduce_ex__`, when
-it exists, is called in preference over :meth:`__reduce__` (you may still
-provide :meth:`__reduce__` for backwards compatibility). The
-:meth:`__reduce_ex__` method will be called with a single integer argument, the
-protocol version.
-
-The :class:`object` class implements both :meth:`__reduce__` and
-:meth:`__reduce_ex__`; however, if a subclass overrides :meth:`__reduce__` but
-not :meth:`__reduce_ex__`, the :meth:`__reduce_ex__` implementation detects this
-and calls :meth:`__reduce__`.
+.. method:: object.__reduce__()
+
+ When the :class:`Pickler` encounters an object of a type it knows nothing
+ about --- such as an extension type --- it looks in two places for a hint of
+ how to pickle it. One alternative is for the object to implement a
+ :meth:`__reduce__` method. If provided, at pickling time :meth:`__reduce__`
+ will be called with no arguments, and it must return either a string or a
+ tuple.
+
+ If a string is returned, it names a global variable whose contents are
+ pickled as normal. The string returned by :meth:`__reduce__` should be the
+ object's local name relative to its module; the pickle module searches the
+ module namespace to determine the object's module.
+
+ When a tuple is returned, it must be between two and five elements long.
+ Optional elements can either be omitted, or ``None`` can be provided as their
+ value. The contents of this tuple are pickled as normal and used to
+ reconstruct the object at unpickling time. The semantics of each element
+ are:
+
+ * A callable object that will be called to create the initial version of the
+ object. The next element of the tuple will provide arguments for this
+ callable, and later elements provide additional state information that will
+ subsequently be used to fully reconstruct the pickled data.
+
+ In the unpickling environment this object must be either a class, a
+ callable registered as a "safe constructor" (see below), or it must have an
+ attribute :attr:`__safe_for_unpickling__` with a true value. Otherwise, an
+ :exc:`UnpicklingError` will be raised in the unpickling environment. Note
+ that as usual, the callable itself is pickled by name.
+
+ * A tuple of arguments for the callable object.
+
+ .. versionchanged:: 2.5
+ Formerly, this argument could also be ``None``.
+
+ * Optionally, the object's state, which will be passed to the object's
+ :meth:`__setstate__` method as described in section :ref:`pickle-inst`. If
+ the object has no :meth:`__setstate__` method, then, as above, the value
+ must be a dictionary and it will be added to the object's :attr:`__dict__`.
+
+ * Optionally, an iterator (and not a sequence) yielding successive list
+ items. These list items will be pickled, and appended to the object using
+ either ``obj.append(item)`` or ``obj.extend(list_of_items)``. This is
+ primarily used for list subclasses, but may be used by other classes as
+ long as they have :meth:`append` and :meth:`extend` methods with the
+ appropriate signature. (Whether :meth:`append` or :meth:`extend` is used
+ depends on which pickle protocol version is used as well as the number of
+ items to append, so both must be supported.)
+
+ * Optionally, an iterator (not a sequence) yielding successive dictionary
+ items, which should be tuples of the form ``(key, value)``. These items
+ will be pickled and stored to the object using ``obj[key] = value``. This
+ is primarily used for dictionary subclasses, but may be used by other
+ classes as long as they implement :meth:`__setitem__`.
+
+.. method:: object.__reduce_ex__(protocol)
+
+ It is sometimes useful to know the protocol version when implementing
+ :meth:`__reduce__`. This can be done by implementing a method named
+ :meth:`__reduce_ex__` instead of :meth:`__reduce__`. :meth:`__reduce_ex__`,
+ when it exists, is called in preference over :meth:`__reduce__` (you may
+ still provide :meth:`__reduce__` for backwards compatibility). The
+ :meth:`__reduce_ex__` method will be called with a single integer argument,
+ the protocol version.
+
+ The :class:`object` class implements both :meth:`__reduce__` and
+ :meth:`__reduce_ex__`; however, if a subclass overrides :meth:`__reduce__`
+ but not :meth:`__reduce_ex__`, the :meth:`__reduce_ex__` implementation
+ detects this and calls :meth:`__reduce__`.
An alternative to implementing a :meth:`__reduce__` method on the object to be
pickled, is to register the callable with the :mod:`copy_reg` module. This
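The two-element tuple form of :meth:`__reduce__` documented above can be sketched in a few lines (a minimal illustration using a hypothetical ``Point`` class, not part of this patch; shown in modern Python syntax, while this branch targets 2.6):

```python
import pickle


class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __reduce__(self):
        # Two-element form: (callable, args). The callable is pickled
        # by name and invoked with *args at unpickling time, so no
        # __dict__ state needs to be stored.
        return (Point, (self.x, self.y))


p = pickle.loads(pickle.dumps(Point(1, 2)))
assert (p.x, p.y) == (1, 2)
```

Because the callable is pickled by name, ``Point`` must be importable from the same module when the pickle is loaded.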
Modified: python/branches/tlee-ast-optimize/Doc/library/rlcompleter.rst
==============================================================================
--- python/branches/tlee-ast-optimize/Doc/library/rlcompleter.rst (original)
+++ python/branches/tlee-ast-optimize/Doc/library/rlcompleter.rst Sat Jul 5 13:12:42 2008
@@ -20,9 +20,9 @@
>>> import readline
>>> readline.parse_and_bind("tab: complete")
>>> readline. <TAB PRESSED>
- readline.__doc__ readline.get_line_buffer readline.read_init_file
- readline.__file__ readline.insert_text readline.set_completer
- readline.__name__ readline.parse_and_bind
+ readline.__doc__ readline.get_line_buffer( readline.read_init_file(
+ readline.__file__ readline.insert_text( readline.set_completer(
+ readline.__name__ readline.parse_and_bind(
>>> readline.
The :mod:`rlcompleter` module is designed for use with Python's interactive
Modified: python/branches/tlee-ast-optimize/Doc/library/shutil.rst
==============================================================================
--- python/branches/tlee-ast-optimize/Doc/library/shutil.rst (original)
+++ python/branches/tlee-ast-optimize/Doc/library/shutil.rst Sat Jul 5 13:12:42 2008
@@ -78,18 +78,41 @@
Unix command :program:`cp -p`.
-.. function:: copytree(src, dst[, symlinks])
+.. function:: ignore_patterns(\*patterns)
+
+ This factory function creates a function that can be used as a callable for
+ :func:`copytree`\'s *ignore* argument, ignoring files and directories that
match one of the glob-style *patterns* provided. See the example below.

+
+ .. versionadded:: 2.6
+
+
+.. function:: copytree(src, dst[, symlinks=False[, ignore=None]])
Recursively copy an entire directory tree rooted at *src*. The destination
- directory, named by *dst*, must not already exist; it will be created as well as
- missing parent directories. Permissions and times of directories are copied with
- :func:`copystat`, individual files are copied using :func:`copy2`. If
- *symlinks* is true, symbolic links in the source tree are represented as
- symbolic links in the new tree; if false or omitted, the contents of the linked
- files are copied to the new tree. If exception(s) occur, an :exc:`Error` is
- raised with a list of reasons.
+ directory, named by *dst*, must not already exist; it will be created as well
+ as missing parent directories. Permissions and times of directories are
+ copied with :func:`copystat`, individual files are copied using
+ :func:`copy2`.
+
+ If *symlinks* is true, symbolic links in the source tree are represented as
+ symbolic links in the new tree; if false or omitted, the contents of the
+ linked files are copied to the new tree.
+
+ If *ignore* is given, it must be a callable that will receive as its
+ arguments the directory being visited by :func:`copytree`, and a list of its
+ contents, as returned by :func:`os.listdir`. Since :func:`copytree` is
+ called recursively, the *ignore* callable will be called once for each
+ directory that is copied. The callable must return a sequence of directory
+ and file names relative to the current directory (i.e. a subset of the items
+ in its second argument); these names will then be ignored in the copy
+ process. :func:`ignore_patterns` can be used to create such a callable that
+ ignores names based on glob-style patterns.
- The source code for this should be considered an example rather than a tool.
+ If exception(s) occur, an :exc:`Error` is raised with a list of reasons.
+
+ The source code for this should be considered an example rather than the
+ ultimate tool.
.. versionchanged:: 2.3
:exc:`Error` is raised if any exceptions occur during copying, rather than
@@ -99,6 +122,9 @@
Create intermediate directories needed to create *dst*, rather than raising an
error. Copy permissions and times of directories using :func:`copystat`.
+ .. versionchanged:: 2.6
+ Added the *ignore* argument to be able to influence what is being copied.
+
.. function:: rmtree(path[, ignore_errors[, onerror]])
@@ -152,11 +178,18 @@
above, with the docstring omitted. It demonstrates many of the other functions
provided by this module. ::
- def copytree(src, dst, symlinks=False):
+ def copytree(src, dst, symlinks=False, ignore=None):
names = os.listdir(src)
+ if ignore is not None:
+ ignored_names = ignore(src, names)
+ else:
+ ignored_names = set()
+
os.makedirs(dst)
errors = []
for name in names:
+ if name in ignored_names:
+ continue
srcname = os.path.join(src, name)
dstname = os.path.join(dst, name)
try:
@@ -164,7 +197,7 @@
linkto = os.readlink(srcname)
os.symlink(linkto, dstname)
elif os.path.isdir(srcname):
- copytree(srcname, dstname, symlinks)
+ copytree(srcname, dstname, symlinks, ignore)
else:
copy2(srcname, dstname)
# XXX What about devices, sockets etc.?
@@ -183,3 +216,24 @@
errors.extend((src, dst, str(why)))
if errors:
raise Error, errors
+
+Another example that uses the :func:`ignore_patterns` helper::
+
+ from shutil import copytree, ignore_patterns
+
+ copytree(source, destination, ignore=ignore_patterns('*.pyc', 'tmp*'))
+
+This will copy everything except ``.pyc`` files and files or directories whose
+name starts with ``tmp``.
+
+Another example that uses the *ignore* argument to add a logging call::
+
+ from shutil import copytree
+ import logging
+
+ def _logpath(path, names):
+ logging.info('Working in %s' % path)
+ return [] # nothing will be ignored
+
+ copytree(source, destination, ignore=_logpath)
+
Modified: python/branches/tlee-ast-optimize/Doc/library/stdtypes.rst
==============================================================================
--- python/branches/tlee-ast-optimize/Doc/library/stdtypes.rst (original)
+++ python/branches/tlee-ast-optimize/Doc/library/stdtypes.rst Sat Jul 5 13:12:42 2008
@@ -2055,12 +2055,12 @@
files, like ttys, it makes sense to continue reading after an EOF is hit.) Note
that this method may call the underlying C function :cfunc:`fread` more than
once in an effort to acquire as close to *size* bytes as possible. Also note
- that when in non-blocking mode, less data than what was requested may be
+ that when in non-blocking mode, less data than was requested may be
returned, even if no *size* parameter was given.
.. note::
- As this function depends of the underlying C function :cfunc:`fread`,
- it resembles its behaviour in details like caching EOF and others.
+ As this function depends on the underlying :cfunc:`fread` C function,
+ it will behave the same in details such as caching EOF.
.. method:: file.readline([size])
Modified: python/branches/tlee-ast-optimize/Doc/library/zipfile.rst
==============================================================================
--- python/branches/tlee-ast-optimize/Doc/library/zipfile.rst (original)
+++ python/branches/tlee-ast-optimize/Doc/library/zipfile.rst Sat Jul 5 13:12:42 2008
@@ -285,7 +285,7 @@
member of the given :class:`ZipInfo` instance. By default, the
:class:`ZipInfo` constructor sets this member to :const:`ZIP_STORED`.
-The following data attribute is also available:
+The following data attributes are also available:
.. attribute:: ZipFile.debug
@@ -294,6 +294,12 @@
output) to ``3`` (the most output). Debugging information is written to
``sys.stdout``.
+.. attribute:: ZipFile.comment
+
+ The comment text associated with the ZIP file. If assigning a comment to a
+ :class:`ZipFile` instance created with mode 'a' or 'w', this should be a
+ string no longer than 65535 bytes. Comments longer than this will be
+ truncated in the written archive when :meth:`ZipFile.close` is called.
.. _pyzipfile-objects:
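The ``comment`` attribute added in r64688 can be exercised with an in-memory archive (a sketch; it uses Python 3 ``bytes`` semantics for the comment, whereas on this 2.x branch the comment is a plain string):

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('hello.txt', 'hello')
    # Must be no longer than 65535 bytes; longer comments are
    # truncated when the archive is written out on close().
    zf.comment = b'archive-level comment'

buf.seek(0)
with zipfile.ZipFile(buf) as zf:
    assert zf.comment == b'archive-level comment'
```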
Modified: python/branches/tlee-ast-optimize/Lib/decimal.py
==============================================================================
--- python/branches/tlee-ast-optimize/Lib/decimal.py (original)
+++ python/branches/tlee-ast-optimize/Lib/decimal.py Sat Jul 5 13:12:42 2008
@@ -5337,20 +5337,20 @@
# other meaning for \d than the numbers [0-9].
import re
-_parser = re.compile(r""" # A numeric string consists of:
+_parser = re.compile(r""" # A numeric string consists of:
# \s*
- (?P<sign>[-+])? # an optional sign, followed by either...
+ (?P<sign>[-+])? # an optional sign, followed by either...
(
- (?=\d|\.\d) # ...a number (with at least one digit)
- (?P<int>\d*) # consisting of a (possibly empty) integer part
- (\.(?P<frac>\d*))? # followed by an optional fractional part
- (E(?P<exp>[-+]?\d+))? # followed by an optional exponent, or...
+ (?=[0-9]|\.[0-9]) # ...a number (with at least one digit)
+ (?P<int>[0-9]*) # having a (possibly empty) integer part
+ (\.(?P<frac>[0-9]*))? # followed by an optional fractional part
+ (E(?P<exp>[-+]?[0-9]+))? # followed by an optional exponent, or...
|
- Inf(inity)? # ...an infinity, or...
+ Inf(inity)? # ...an infinity, or...
|
- (?P<signal>s)? # ...an (optionally signaling)
- NaN # NaN
- (?P<diag>\d*) # with (possibly empty) diagnostic information.
+ (?P<signal>s)? # ...an (optionally signaling)
+ NaN # NaN
+ (?P<diag>[0-9]*) # with (possibly empty) diagnostic info.
)
# \s*
\Z
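The motivation for replacing ``\d`` with ``[0-9]`` in the hunk above can be demonstrated directly with :mod:`re` (a sketch; the variable name is illustrative): under Unicode matching, ``\d`` matches any character in category Nd, so a fullwidth digit would slip through, while the explicit ``[0-9]`` class keeps Decimal's numeric-string grammar ASCII-only.

```python
import re

fullwidth_one = u'\uFF11'  # u'\N{FULLWIDTH DIGIT ONE}'

# With Unicode matching, \d accepts any Unicode decimal digit...
assert re.match(r'\d', fullwidth_one, re.UNICODE) is not None
# ...but the explicit character class does not.
assert re.match(r'[0-9]', fullwidth_one) is None
```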
Modified: python/branches/tlee-ast-optimize/Lib/pydoc.py
==============================================================================
--- python/branches/tlee-ast-optimize/Lib/pydoc.py (original)
+++ python/branches/tlee-ast-optimize/Lib/pydoc.py Sat Jul 5 13:12:42 2008
@@ -160,8 +160,9 @@
def visiblename(name, all=None):
"""Decide whether to show documentation on a variable."""
# Certain special names are redundant.
- if name in ('__builtins__', '__doc__', '__file__', '__path__',
- '__module__', '__name__', '__slots__'): return 0
+ _hidden_names = ('__builtins__', '__doc__', '__file__', '__path__',
+ '__module__', '__name__', '__slots__', '__package__')
+ if name in _hidden_names: return 0
# Private names are hidden, but special names are displayed.
if name.startswith('__') and name.endswith('__'): return 1
if all is not None:
Modified: python/branches/tlee-ast-optimize/Lib/rlcompleter.py
==============================================================================
--- python/branches/tlee-ast-optimize/Lib/rlcompleter.py (original)
+++ python/branches/tlee-ast-optimize/Lib/rlcompleter.py Sat Jul 5 13:12:42 2008
@@ -92,6 +92,11 @@
except IndexError:
return None
+ def _callable_postfix(self, val, word):
+ if callable(val):
+ word = word + "("
+ return word
+
def global_matches(self, text):
"""Compute matches when text is a simple name.
@@ -102,12 +107,13 @@
import keyword
matches = []
n = len(text)
- for list in [keyword.kwlist,
- __builtin__.__dict__,
- self.namespace]:
- for word in list:
+ for word in keyword.kwlist:
+ if word[:n] == text:
+ matches.append(word)
+ for nspace in [__builtin__.__dict__, self.namespace]:
+ for word, val in nspace.items():
if word[:n] == text and word != "__builtins__":
- matches.append(word)
+ matches.append(self._callable_postfix(val, word))
return matches
def attr_matches(self, text):
@@ -139,7 +145,9 @@
n = len(attr)
for word in words:
if word[:n] == attr and word != "__builtins__":
- matches.append("%s.%s" % (expr, word))
+ val = getattr(object, word)
+ word = self._callable_postfix(val, "%s.%s" % (expr, word))
+ matches.append(word)
return matches
def get_class_members(klass):
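The helper added in r64664 is small enough to sketch standalone (``callable_postfix`` here is a hypothetical free-function version of the new ``Completer._callable_postfix`` method): it appends ``"("`` to a completion when the matched value is callable, which produces the parenthesized completions shown in the rlcompleter.rst hunk above.

```python
def callable_postfix(val, word):
    # Append "(" when the completed object is callable, so TAB
    # completion hints that the name must be called.
    if callable(val):
        word = word + "("
    return word


assert callable_postfix(len, "len") == "len("
assert callable_postfix(42, "answer") == "answer"
```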
Modified: python/branches/tlee-ast-optimize/Lib/shutil.py
==============================================================================
--- python/branches/tlee-ast-optimize/Lib/shutil.py (original)
+++ python/branches/tlee-ast-optimize/Lib/shutil.py Sat Jul 5 13:12:42 2008
@@ -8,6 +8,7 @@
import sys
import stat
from os.path import abspath
+import fnmatch
__all__ = ["copyfileobj","copyfile","copymode","copystat","copy","copy2",
"copytree","move","rmtree","Error"]
@@ -93,8 +94,19 @@
copyfile(src, dst)
copystat(src, dst)
+def ignore_patterns(*patterns):
+ """Function that can be used as copytree() ignore parameter.
-def copytree(src, dst, symlinks=False):
+ Patterns is a sequence of glob-style patterns
+ that are used to exclude files"""
+ def _ignore_patterns(path, names):
+ ignored_names = []
+ for pattern in patterns:
+ ignored_names.extend(fnmatch.filter(names, pattern))
+ return set(ignored_names)
+ return _ignore_patterns
+
+def copytree(src, dst, symlinks=False, ignore=None):
"""Recursively copy a directory tree using copy2().
The destination directory must not already exist.
@@ -105,13 +117,32 @@
it is false, the contents of the files pointed to by symbolic
links are copied.
+ The optional ignore argument is a callable. If given, it
+ is called with the `src` parameter, which is the directory
+ being visited by copytree(), and `names` which is the list of
+ `src` contents, as returned by os.listdir():
+
+ callable(src, names) -> ignored_names
+
+ Since copytree() is called recursively, the callable will be
+ called once for each directory that is copied. It returns a
+ list of names relative to the `src` directory that should
+ not be copied.
+
XXX Consider this example code rather than the ultimate tool.
"""
names = os.listdir(src)
+ if ignore is not None:
+ ignored_names = ignore(src, names)
+ else:
+ ignored_names = set()
+
os.makedirs(dst)
errors = []
for name in names:
+ if name in ignored_names:
+ continue
srcname = os.path.join(src, name)
dstname = os.path.join(dst, name)
try:
@@ -119,7 +150,7 @@
linkto = os.readlink(srcname)
os.symlink(linkto, dstname)
elif os.path.isdir(srcname):
- copytree(srcname, dstname, symlinks)
+ copytree(srcname, dstname, symlinks, ignore)
else:
copy2(srcname, dstname)
# XXX What about devices, sockets etc.?
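The new `ignore_patterns` factory can be exercised on its own. A quick sketch, mirroring the code added above, showing what the returned callable yields for a sample directory listing:

```python
import fnmatch

def ignore_patterns(*patterns):
    # As added to shutil above: build a callable suitable for
    # copytree(..., ignore=...) from glob-style patterns.
    def _ignore_patterns(path, names):
        ignored_names = []
        for pattern in patterns:
            ignored_names.extend(fnmatch.filter(names, pattern))
        return set(ignored_names)
    return _ignore_patterns

ign = ignore_patterns('*.tmp', 'test_dir2')
names = ['test.txt', 'test.tmp', 'test_dir2']
print(sorted(ign('/some/src', names)))  # ['test.tmp', 'test_dir2']
```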
Modified: python/branches/tlee-ast-optimize/Lib/test/test_cookielib.py
==============================================================================
--- python/branches/tlee-ast-optimize/Lib/test/test_cookielib.py (original)
+++ python/branches/tlee-ast-optimize/Lib/test/test_cookielib.py Sat Jul 5 13:12:42 2008
@@ -1,4 +1,4 @@
-# -*- coding: utf-8 -*-
+# -*- coding: latin-1 -*-
"""Tests for cookielib.py."""
import re, os, time
Modified: python/branches/tlee-ast-optimize/Lib/test/test_decimal.py
==============================================================================
--- python/branches/tlee-ast-optimize/Lib/test/test_decimal.py (original)
+++ python/branches/tlee-ast-optimize/Lib/test/test_decimal.py Sat Jul 5 13:12:42 2008
@@ -432,6 +432,9 @@
self.assertEqual(str(Decimal(u'-Inf')), '-Infinity')
self.assertEqual(str(Decimal(u'NaN123')), 'NaN123')
+ #but alternate unicode digits should not
+ self.assertEqual(str(Decimal(u'\uff11')), 'NaN')
+
def test_explicit_from_tuples(self):
#zero
Modified: python/branches/tlee-ast-optimize/Lib/test/test_multiprocessing.py
==============================================================================
--- python/branches/tlee-ast-optimize/Lib/test/test_multiprocessing.py (original)
+++ python/branches/tlee-ast-optimize/Lib/test/test_multiprocessing.py Sat Jul 5 13:12:42 2008
@@ -1,3 +1,5 @@
+#!/usr/bin/env python
+
#
# Unit tests for the multiprocessing package
#
@@ -960,7 +962,6 @@
def sqr(x, wait=0.0):
time.sleep(wait)
return x*x
-"""
class _TestPool(BaseTestCase):
def test_apply(self):
@@ -1030,7 +1031,6 @@
join = TimingWrapper(self.pool.join)
join()
self.assertTrue(join.elapsed < 0.2)
-"""
#
# Test that manager has expected number of shared objects left
#
@@ -1333,7 +1333,6 @@
self.assertRaises(ValueError, a.send_bytes, msg, 4, -1)
-"""
class _TestListenerClient(BaseTestCase):
ALLOWED_TYPES = ('processes', 'threads')
@@ -1353,7 +1352,6 @@
self.assertEqual(conn.recv(), 'hello')
p.join()
l.close()
-"""
#
# Test of sending connection and socket objects between processes
#
@@ -1769,28 +1767,28 @@
multiprocessing.get_logger().setLevel(LOG_LEVEL)
- #ProcessesMixin.pool = multiprocessing.Pool(4)
- #ThreadsMixin.pool = multiprocessing.dummy.Pool(4)
- #ManagerMixin.manager.__init__()
- #ManagerMixin.manager.start()
- #ManagerMixin.pool = ManagerMixin.manager.Pool(4)
+ ProcessesMixin.pool = multiprocessing.Pool(4)
+ ThreadsMixin.pool = multiprocessing.dummy.Pool(4)
+ ManagerMixin.manager.__init__()
+ ManagerMixin.manager.start()
+ ManagerMixin.pool = ManagerMixin.manager.Pool(4)
testcases = (
- sorted(testcases_processes.values(), key=lambda tc:tc.__name__) #+
- #sorted(testcases_threads.values(), key=lambda tc:tc.__name__) +
- #sorted(testcases_manager.values(), key=lambda tc:tc.__name__)
+ sorted(testcases_processes.values(), key=lambda tc:tc.__name__) +
+ sorted(testcases_threads.values(), key=lambda tc:tc.__name__) +
+ sorted(testcases_manager.values(), key=lambda tc:tc.__name__)
)
loadTestsFromTestCase = unittest.defaultTestLoader.loadTestsFromTestCase
suite = unittest.TestSuite(loadTestsFromTestCase(tc) for tc in testcases)
run(suite)
- #ThreadsMixin.pool.terminate()
- #ProcessesMixin.pool.terminate()
- #ManagerMixin.pool.terminate()
- #ManagerMixin.manager.shutdown()
+ ThreadsMixin.pool.terminate()
+ ProcessesMixin.pool.terminate()
+ ManagerMixin.pool.terminate()
+ ManagerMixin.manager.shutdown()
- #del ProcessesMixin.pool, ThreadsMixin.pool, ManagerMixin.pool
+ del ProcessesMixin.pool, ThreadsMixin.pool, ManagerMixin.pool
def main():
test_main(unittest.TextTestRunner(verbosity=2).run)
Modified: python/branches/tlee-ast-optimize/Lib/test/test_pydoc.py
==============================================================================
--- python/branches/tlee-ast-optimize/Lib/test/test_pydoc.py (original)
+++ python/branches/tlee-ast-optimize/Lib/test/test_pydoc.py Sat Jul 5 13:12:42 2008
@@ -57,7 +57,6 @@
DATA
__author__ = 'Benjamin Peterson'
__credits__ = 'Nobody'
- __package__ = None
__version__ = '1.2.3.4'
VERSION
@@ -146,7 +145,6 @@
<tr><td bgcolor="#55aa55"><tt> </tt></td><td> </td>
<td width="100%%"><strong>__author__</strong> = 'Benjamin Peterson'<br>
<strong>__credits__</strong> = 'Nobody'<br>
-<strong>__package__</strong> = None<br>
<strong>__version__</strong> = '1.2.3.4'</td></tr></table><p>
<table width="100%%" cellspacing=0 cellpadding=2 border=0 summary="section">
<tr bgcolor="#7799ee">
Modified: python/branches/tlee-ast-optimize/Lib/test/test_shutil.py
==============================================================================
--- python/branches/tlee-ast-optimize/Lib/test/test_shutil.py (original)
+++ python/branches/tlee-ast-optimize/Lib/test/test_shutil.py Sat Jul 5 13:12:42 2008
@@ -108,6 +108,82 @@
if os.path.exists(path):
shutil.rmtree(path)
+ def test_copytree_with_exclude(self):
+
+ def write_data(path, data):
+ f = open(path, "w")
+ f.write(data)
+ f.close()
+
+ def read_data(path):
+ f = open(path)
+ data = f.read()
+ f.close()
+ return data
+
+ # creating data
+ join = os.path.join
+ exists = os.path.exists
+ src_dir = tempfile.mkdtemp()
+ dst_dir = join(tempfile.mkdtemp(), 'destination')
+ write_data(join(src_dir, 'test.txt'), '123')
+ write_data(join(src_dir, 'test.tmp'), '123')
+ os.mkdir(join(src_dir, 'test_dir'))
+ write_data(join(src_dir, 'test_dir', 'test.txt'), '456')
+ os.mkdir(join(src_dir, 'test_dir2'))
+ write_data(join(src_dir, 'test_dir2', 'test.txt'), '456')
+ os.mkdir(join(src_dir, 'test_dir2', 'subdir'))
+ os.mkdir(join(src_dir, 'test_dir2', 'subdir2'))
+ write_data(join(src_dir, 'test_dir2', 'subdir', 'test.txt'), '456')
+ write_data(join(src_dir, 'test_dir2', 'subdir2', 'test.py'), '456')
+
+
+ # testing glob-like patterns
+ try:
+ patterns = shutil.ignore_patterns('*.tmp', 'test_dir2')
+ shutil.copytree(src_dir, dst_dir, ignore=patterns)
+ # checking the result: some elements should not be copied
+ self.assert_(exists(join(dst_dir, 'test.txt')))
+ self.assert_(not exists(join(dst_dir, 'test.tmp')))
+ self.assert_(not exists(join(dst_dir, 'test_dir2')))
+ finally:
+ if os.path.exists(dst_dir):
+ shutil.rmtree(dst_dir)
+ try:
+ patterns = shutil.ignore_patterns('*.tmp', 'subdir*')
+ shutil.copytree(src_dir, dst_dir, ignore=patterns)
+ # checking the result: some elements should not be copied
+ self.assert_(not exists(join(dst_dir, 'test.tmp')))
+ self.assert_(not exists(join(dst_dir, 'test_dir2', 'subdir2')))
+ self.assert_(not exists(join(dst_dir, 'test_dir2', 'subdir')))
+ finally:
+ if os.path.exists(dst_dir):
+ shutil.rmtree(dst_dir)
+
+ # testing callable-style
+ try:
+ def _filter(src, names):
+ res = []
+ for name in names:
+ path = os.path.join(src, name)
+
+ if (os.path.isdir(path) and
+ path.split()[-1] == 'subdir'):
+ res.append(name)
+ elif os.path.splitext(path)[-1] in ('.py',):
+ res.append(name)
+ return res
+
+ shutil.copytree(src_dir, dst_dir, ignore=_filter)
+
+ # checking the result: some elements should not be copied
+ self.assert_(not exists(join(dst_dir, 'test_dir2', 'subdir2',
+ 'test.py')))
+ self.assert_(not exists(join(dst_dir, 'test_dir2', 'subdir')))
+
+ finally:
+ if os.path.exists(dst_dir):
+ shutil.rmtree(dst_dir)
if hasattr(os, "symlink"):
def test_dont_copy_file_onto_link_to_itself(self):
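The callable-style filter exercised in the test above can be distilled into a standalone example. The helper name here is hypothetical (not part of the patch); it ignores compiled files by extension:

```python
import os

def ignore_pyc(src, names):
    # Callable-style ignore for copytree(): return the subset of
    # `names` under `src` that should be skipped.
    return set(n for n in names if os.path.splitext(n)[1] == '.pyc')

print(sorted(ignore_pyc('/tmp/src', ['a.py', 'a.pyc', 'subdir'])))  # ['a.pyc']
```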
Modified: python/branches/tlee-ast-optimize/Lib/test/test_zipfile.py
==============================================================================
--- python/branches/tlee-ast-optimize/Lib/test/test_zipfile.py (original)
+++ python/branches/tlee-ast-optimize/Lib/test/test_zipfile.py Sat Jul 5 13:12:42 2008
@@ -712,6 +712,54 @@
zipf.writestr("foo.txt\x00qqq", "O, for a Muse of Fire!")
self.assertEqual(zipf.namelist(), ['foo.txt'])
+ def test_StructSizes(self):
+ # check that ZIP internal structure sizes are calculated correctly
+ self.assertEqual(zipfile.sizeEndCentDir, 22)
+ self.assertEqual(zipfile.sizeCentralDir, 46)
+ self.assertEqual(zipfile.sizeEndCentDir64, 56)
+ self.assertEqual(zipfile.sizeEndCentDir64Locator, 20)
+
+ def testComments(self):
+ # This test checks that comments on the archive are handled properly
+
+ # check default comment is empty
+ zipf = zipfile.ZipFile(TESTFN, mode="w")
+ self.assertEqual(zipf.comment, '')
+ zipf.writestr("foo.txt", "O, for a Muse of Fire!")
+ zipf.close()
+ zipfr = zipfile.ZipFile(TESTFN, mode="r")
+ self.assertEqual(zipfr.comment, '')
+ zipfr.close()
+
+ # check a simple short comment
+ comment = 'Bravely taking to his feet, he beat a very brave retreat.'
+ zipf = zipfile.ZipFile(TESTFN, mode="w")
+ zipf.comment = comment
+ zipf.writestr("foo.txt", "O, for a Muse of Fire!")
+ zipf.close()
+ zipfr = zipfile.ZipFile(TESTFN, mode="r")
+ self.assertEqual(zipfr.comment, comment)
+ zipfr.close()
+
+ # check a comment of max length
+ comment2 = ''.join(['%d' % (i**3 % 10) for i in xrange((1 << 16)-1)])
+ zipf = zipfile.ZipFile(TESTFN, mode="w")
+ zipf.comment = comment2
+ zipf.writestr("foo.txt", "O, for a Muse of Fire!")
+ zipf.close()
+ zipfr = zipfile.ZipFile(TESTFN, mode="r")
+ self.assertEqual(zipfr.comment, comment2)
+ zipfr.close()
+
+ # check a comment that is too long is truncated
+ zipf = zipfile.ZipFile(TESTFN, mode="w")
+ zipf.comment = comment2 + 'oops'
+ zipf.writestr("foo.txt", "O, for a Muse of Fire!")
+ zipf.close()
+ zipfr = zipfile.ZipFile(TESTFN, mode="r")
+ self.assertEqual(zipfr.comment, comment2)
+ zipfr.close()
+
def tearDown(self):
support.unlink(TESTFN)
support.unlink(TESTFN2)
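The round trip exercised by testComments can be reproduced directly against zipfile, sketched here with Python 3 spellings (where the comment attribute holds bytes) and an in-memory file so nothing touches disk:

```python
import io
import zipfile

buf = io.BytesIO()
zf = zipfile.ZipFile(buf, mode="w")
zf.comment = b"Bravely taking to his feet, he beat a very brave retreat."
zf.writestr("foo.txt", "O, for a Muse of Fire!")
zf.close()

# Re-open the archive and check the comment survived the round trip.
zf2 = zipfile.ZipFile(io.BytesIO(buf.getvalue()), mode="r")
print(zf2.comment == b"Bravely taking to his feet, he beat a very brave retreat.")  # True
```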
Modified: python/branches/tlee-ast-optimize/Lib/test/test_zipfile64.py
==============================================================================
--- python/branches/tlee-ast-optimize/Lib/test/test_zipfile64.py (original)
+++ python/branches/tlee-ast-optimize/Lib/test/test_zipfile64.py Sat Jul 5 13:12:42 2008
@@ -2,6 +2,7 @@
# The test_support.requires call is the only reason for keeping this separate
# from test_zipfile
from test import test_support
+
# XXX(nnorwitz): disable this test by looking for extra largfile resource
# which doesn't exist. This test takes over 30 minutes to run in general
# and requires more disk space than most of the buildbots.
@@ -93,8 +94,31 @@
if os.path.exists(fname):
os.remove(fname)
+
+class OtherTests(unittest.TestCase):
+ def testMoreThan64kFiles(self):
+ # This test checks that more than 64k files can be added to an archive,
+ # and that the resulting archive can be read properly by ZipFile
+ zipf = zipfile.ZipFile(TESTFN, mode="w")
+ zipf.debug = 100
+ numfiles = (1 << 16) * 3/2
+ for i in xrange(numfiles):
+ zipf.writestr("foo%08d" % i, "%d" % (i**3 % 57))
+ self.assertEqual(len(zipf.namelist()), numfiles)
+ zipf.close()
+
+ zipf2 = zipfile.ZipFile(TESTFN, mode="r")
+ self.assertEqual(len(zipf2.namelist()), numfiles)
+ for i in xrange(numfiles):
+ self.assertEqual(zipf2.read("foo%08d" % i), "%d" % (i**3 % 57))
+ zipf2.close()
+
+ def tearDown(self):
+ test_support.unlink(TESTFN)
+ test_support.unlink(TESTFN2)
+
def test_main():
- run_unittest(TestsWithSourceFile)
+ run_unittest(TestsWithSourceFile, OtherTests)
if __name__ == "__main__":
test_main()
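The 98304 entries written by testMoreThan64kFiles deliberately exceed the 16-bit entry-count field of the end record, which is why the close() code later in this patch stores the count modulo ZIP_FILECOUNT_LIMIT:

```python
ZIP_FILECOUNT_LIMIT = 1 << 16          # entry counts representable in 16 bits

numfiles = (1 << 16) * 3 // 2          # 98304 entries, as in the test
print(numfiles)                        # 98304
print(numfiles % ZIP_FILECOUNT_LIMIT)  # 32768: what fits in the end record
```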
Modified: python/branches/tlee-ast-optimize/Lib/zipfile.py
==============================================================================
--- python/branches/tlee-ast-optimize/Lib/zipfile.py (original)
+++ python/branches/tlee-ast-optimize/Lib/zipfile.py Sat Jul 5 13:12:42 2008
@@ -27,31 +27,79 @@
error = BadZipfile # The exception raised by this module
ZIP64_LIMIT= (1 << 31) - 1
+ZIP_FILECOUNT_LIMIT = 1 << 16
+ZIP_MAX_COMMENT = (1 << 16) - 1
# constants for Zip file compression methods
ZIP_STORED = 0
ZIP_DEFLATED = 8
# Other ZIP compression methods not supported
-# Here are some struct module formats for reading headers
-structEndArchive = "<4s4H2LH" # 9 items, end of archive, 22 bytes
-stringEndArchive = "PK\005\006" # magic number for end of archive record
-structCentralDir = "<4s4B4HLLL5HLL"# 19 items, central directory, 46 bytes
-stringCentralDir = "PK\001\002" # magic number for central directory
-structFileHeader = "<4s2B4HLLL2H" # 12 items, file header record, 30 bytes
-stringFileHeader = "PK\003\004" # magic number for file header
-structEndArchive64Locator = "<4sLQL" # 4 items, locate Zip64 header, 20 bytes
-stringEndArchive64Locator = "PK\x06\x07" # magic token for locator header
-structEndArchive64 = "<4sQHHLLQQQQ" # 10 items, end of archive (Zip64), 56 bytes
-stringEndArchive64 = "PK\x06\x06" # magic token for Zip64 header
-
+# Below are some formats and associated data for reading/writing headers using
+# the struct module. The names and structures of headers/records are those used
+# in the PKWARE description of the ZIP file format:
+# http://www.pkware.com/documents/casestudies/APPNOTE.TXT
+# (URL valid as of January 2008)
+
+# The "end of central directory" structure, magic number, size, and indices
+# (section V.I in the format document)
+structEndCentDir = "<4s4H2LH"
+magicEndCentDir = "PK\005\006"
+sizeEndCentDir = struct.calcsize(structEndCentDir)
+
+_ECD_SIGNATURE = 0
+_ECD_DISK_NUMBER = 1
+_ECD_DISK_START = 2
+_ECD_ENTRIES_THIS_DISK = 3
+_ECD_ENTRIES_TOTAL = 4
+_ECD_SIZE = 5
+_ECD_OFFSET = 6
+_ECD_COMMENT_SIZE = 7
+# These last two indices are not part of the structure as defined in the
+# spec, but they are used internally by this module as a convenience
+_ECD_COMMENT = 8
+_ECD_LOCATION = 9
+
+# The "central directory" structure, magic number, size, and indices
+# of entries in the structure (section V.F in the format document)
+structCentralDir = "<4s4B4HL2L5H2L"
+magicCentralDir = "PK\001\002"
+sizeCentralDir = struct.calcsize(structCentralDir)
+
+# The "local file header" structure, magic number, size, and indices
+# (section V.A in the format document)
+structFileHeader = "<4s2B4HL2L2H"
+magicFileHeader = "PK\003\004"
+sizeFileHeader = struct.calcsize(structFileHeader)
+
+# The "Zip64 end of central directory locator" structure, magic number, and size
+structEndCentDir64Locator = "<4sLQL"
+magicEndCentDir64Locator = "PK\x06\x07"
+sizeEndCentDir64Locator = struct.calcsize(structEndCentDir64Locator)
+
+# The "Zip64 end of central directory" record, magic number, size, and indices
+# (section V.G in the format document)
+structEndCentDir64 = "<4sQ2H2L4Q"
+magicEndCentDir64 = "PK\x06\x06"
+sizeEndCentDir64 = struct.calcsize(structEndCentDir64)
+
+_CD64_SIGNATURE = 0
+_CD64_DIRECTORY_RECSIZE = 1
+_CD64_CREATE_VERSION = 2
+_CD64_EXTRACT_VERSION = 3
+_CD64_DISK_NUMBER = 4
+_CD64_DISK_NUMBER_START = 5
+_CD64_NUMBER_ENTRIES_THIS_DISK = 6
+_CD64_NUMBER_ENTRIES_TOTAL = 7
+_CD64_DIRECTORY_SIZE = 8
+_CD64_OFFSET_START_CENTDIR = 9
# indexes of entries in the central directory structure
_CD_SIGNATURE = 0
_CD_CREATE_VERSION = 1
_CD_CREATE_SYSTEM = 2
_CD_EXTRACT_VERSION = 3
-_CD_EXTRACT_SYSTEM = 4 # is this meaningful?
+_CD_EXTRACT_SYSTEM = 4
_CD_FLAG_BITS = 5
_CD_COMPRESS_TYPE = 6
_CD_TIME = 7
@@ -67,10 +115,15 @@
_CD_EXTERNAL_FILE_ATTRIBUTES = 17
_CD_LOCAL_HEADER_OFFSET = 18
-# indexes of entries in the local file header structure
+# The "local file header" structure, magic number, size, and indices
+# (section V.A in the format document)
+structFileHeader = "<4s2B4HL2L2H"
+magicFileHeader = "PK\003\004"
+sizeFileHeader = struct.calcsize(structFileHeader)
+
_FH_SIGNATURE = 0
_FH_EXTRACT_VERSION = 1
-_FH_EXTRACT_SYSTEM = 2 # is this meaningful?
+_FH_EXTRACT_SYSTEM = 2
_FH_GENERAL_PURPOSE_FLAG_BITS = 3
_FH_COMPRESSION_METHOD = 4
_FH_LAST_MOD_TIME = 5
@@ -81,6 +134,28 @@
_FH_FILENAME_LENGTH = 10
_FH_EXTRA_FIELD_LENGTH = 11
+# The "Zip64 end of central directory locator" structure, magic number, and size
+structEndCentDir64Locator = "<4sLQL"
+magicEndCentDir64Locator = "PK\x06\x07"
+sizeEndCentDir64Locator = struct.calcsize(structEndCentDir64Locator)
+
+# The "Zip64 end of central directory" record, magic number, size, and indices
+# (section V.G in the format document)
+structEndCentDir64 = "<4sQ2H2L4Q"
+magicEndCentDir64 = "PK\x06\x06"
+sizeEndCentDir64 = struct.calcsize(structEndCentDir64)
+
+_CD64_SIGNATURE = 0
+_CD64_DIRECTORY_RECSIZE = 1
+_CD64_CREATE_VERSION = 2
+_CD64_EXTRACT_VERSION = 3
+_CD64_DISK_NUMBER = 4
+_CD64_DISK_NUMBER_START = 5
+_CD64_NUMBER_ENTRIES_THIS_DISK = 6
+_CD64_NUMBER_ENTRIES_TOTAL = 7
+_CD64_DIRECTORY_SIZE = 8
+_CD64_OFFSET_START_CENTDIR = 9
+
def is_zipfile(filename):
"""Quickly see if file is a ZIP file by checking the magic number."""
try:
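The sizes asserted by test_StructSizes follow directly from the little-endian struct formats introduced above (standard sizes, no padding); the check can be run standalone:

```python
import struct

# Formats as defined in the patch (little-endian, no padding).
structEndCentDir = "<4s4H2LH"
structCentralDir = "<4s4B4HL2L5H2L"
structFileHeader = "<4s2B4HL2L2H"
structEndCentDir64Locator = "<4sLQL"
structEndCentDir64 = "<4sQ2H2L4Q"

print(struct.calcsize(structEndCentDir))           # 22
print(struct.calcsize(structCentralDir))           # 46
print(struct.calcsize(structFileHeader))           # 30
print(struct.calcsize(structEndCentDir64Locator))  # 20
print(struct.calcsize(structEndCentDir64))         # 56
```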
@@ -97,33 +172,31 @@
"""
Read the ZIP64 end-of-archive records and use that to update endrec
"""
- locatorSize = struct.calcsize(structEndArchive64Locator)
- fpin.seek(offset - locatorSize, 2)
- data = fpin.read(locatorSize)
- sig, diskno, reloff, disks = struct.unpack(structEndArchive64Locator, data)
- if sig != stringEndArchive64Locator:
+ fpin.seek(offset - sizeEndCentDir64Locator, 2)
+ data = fpin.read(sizeEndCentDir64Locator)
+ sig, diskno, reloff, disks = struct.unpack(structEndCentDir64Locator, data)
+ if sig != magicEndCentDir64Locator:
return endrec
if diskno != 0 or disks != 1:
raise BadZipfile("zipfiles that span multiple disks are not supported")
# Assume no 'zip64 extensible data'
- endArchiveSize = struct.calcsize(structEndArchive64)
- fpin.seek(offset - locatorSize - endArchiveSize, 2)
- data = fpin.read(endArchiveSize)
+ fpin.seek(offset - sizeEndCentDir64Locator - sizeEndCentDir64, 2)
+ data = fpin.read(sizeEndCentDir64)
sig, sz, create_version, read_version, disk_num, disk_dir, \
dircount, dircount2, dirsize, diroffset = \
- struct.unpack(structEndArchive64, data)
- if sig != stringEndArchive64:
+ struct.unpack(structEndCentDir64, data)
+ if sig != magicEndCentDir64:
return endrec
# Update the original endrec using data from the ZIP64 record
- endrec[1] = disk_num
- endrec[2] = disk_dir
- endrec[3] = dircount
- endrec[4] = dircount2
- endrec[5] = dirsize
- endrec[6] = diroffset
+ endrec[_ECD_DISK_NUMBER] = disk_num
+ endrec[_ECD_DISK_START] = disk_dir
+ endrec[_ECD_ENTRIES_THIS_DISK] = dircount
+ endrec[_ECD_ENTRIES_TOTAL] = dircount2
+ endrec[_ECD_SIZE] = dirsize
+ endrec[_ECD_OFFSET] = diroffset
return endrec
@@ -132,38 +205,59 @@
The data is a list of the nine items in the ZIP "End of central dir"
record followed by a tenth item, the file seek offset of this record."""
- fpin.seek(-22, 2) # Assume no archive comment.
- filesize = fpin.tell() + 22 # Get file size
+
+ # Determine file size
+ fpin.seek(0, 2)
+ filesize = fpin.tell()
+
+ # Check to see if this is a ZIP file with no archive comment (the
+ # "end of central directory" structure should be the last item in the
+ # file if this is the case).
+ fpin.seek(-sizeEndCentDir, 2)
data = fpin.read()
- if data[0:4] == stringEndArchive and data[-2:] == "\000\000":
- endrec = struct.unpack(structEndArchive, data)
- endrec = list(endrec)
- endrec.append("") # Append the archive comment
- endrec.append(filesize - 22) # Append the record start offset
- if endrec[-4] == 0xffffffff:
- return _EndRecData64(fpin, -22, endrec)
+ if data[0:4] == magicEndCentDir and data[-2:] == "\000\000":
+ # the signature is correct and there's no comment, unpack structure
+ endrec = struct.unpack(structEndCentDir, data)
+ endrec = list(endrec)
+
+ # Append a blank comment and record start offset
+ endrec.append("")
+ endrec.append(filesize - sizeEndCentDir)
+ if endrec[_ECD_OFFSET] == 0xffffffff:
+ # the value for the "offset of the start of the central directory"
+ # indicates that there is a "Zip64 end of central directory"
+ # structure present, so go look for it
+ return _EndRecData64(fpin, -sizeEndCentDir, endrec)
+
return endrec
- # Search the last END_BLOCK bytes of the file for the record signature.
- # The comment is appended to the ZIP file and has a 16 bit length.
- # So the comment may be up to 64K long. We limit the search for the
- # signature to a few Kbytes at the end of the file for efficiency.
- # also, the signature must not appear in the comment.
- END_BLOCK = min(filesize, 1024 * 4)
- fpin.seek(filesize - END_BLOCK, 0)
+
+ # Either this is not a ZIP file, or it is a ZIP file with an archive
+ # comment. Search the end of the file for the "end of central directory"
+ # record signature. The comment is the last item in the ZIP file and may be
+ # up to 64K long. It is assumed that the "end of central directory" magic
+ # number does not appear in the comment.
+ maxCommentStart = max(filesize - (1 << 16) - sizeEndCentDir, 0)
+ fpin.seek(maxCommentStart, 0)
data = fpin.read()
- start = data.rfind(stringEndArchive)
- if start >= 0: # Correct signature string was found
- endrec = struct.unpack(structEndArchive, data[start:start+22])
- endrec = list(endrec)
- comment = data[start+22:]
- if endrec[7] == len(comment): # Comment length checks out
+ start = data.rfind(magicEndCentDir)
+ if start >= 0:
+ # found the magic number; attempt to unpack and interpret
+ recData = data[start:start+sizeEndCentDir]
+ endrec = list(struct.unpack(structEndCentDir, recData))
+ comment = data[start+sizeEndCentDir:]
+ # check that comment length is correct
+ if endrec[_ECD_COMMENT_SIZE] == len(comment):
# Append the archive comment and start offset
endrec.append(comment)
- endrec.append(filesize - END_BLOCK + start)
- if endrec[-4] == 0xffffffff:
- return _EndRecData64(fpin, - END_BLOCK + start, endrec)
+ endrec.append(maxCommentStart + start)
+ if endrec[_ECD_OFFSET] == 0xffffffff:
+ # There is apparently a "Zip64 end of central directory"
+ # structure present, so go look for it
+ return _EndRecData64(fpin, start - filesize, endrec)
return endrec
- return # Error, return None
+
+ # Unable to find a valid end of central directory structure
+ return
class ZipInfo (object):
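The backwards search for the "end of central directory" record can be sketched in isolation. This is a toy version of the logic above, using bytes literals so it runs on modern Python:

```python
import struct

structEndCentDir = "<4s4H2LH"
magicEndCentDir = b"PK\x05\x06"
sizeEndCentDir = struct.calcsize(structEndCentDir)
_ECD_COMMENT_SIZE = 7

def find_eocd(data):
    # Scan backwards for the magic number, then validate that the
    # stored comment length matches the trailing bytes, as in the patch.
    start = data.rfind(magicEndCentDir)
    if start < 0:
        return None
    endrec = list(struct.unpack(structEndCentDir,
                                data[start:start + sizeEndCentDir]))
    comment = data[start + sizeEndCentDir:]
    if endrec[_ECD_COMMENT_SIZE] == len(comment):
        return endrec + [comment, start]
    return None

# A minimal fabricated end record with a 3-byte comment.
rec = struct.pack(structEndCentDir, magicEndCentDir, 0, 0, 1, 1, 46, 100, 3)
data = b"...central directory bytes..." + rec + b"abc"
print(find_eocd(data)[-2])  # b'abc'
```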
@@ -250,13 +344,13 @@
fmt = '<HHQQ'
extra = extra + struct.pack(fmt,
1, struct.calcsize(fmt)-4, file_size, compress_size)
- file_size = 0xffffffff # -1
- compress_size = 0xffffffff # -1
+ file_size = 0xffffffff
+ compress_size = 0xffffffff
self.extract_version = max(45, self.extract_version)
self.create_version = max(45, self.extract_version)
filename, flag_bits = self._encodeFilenameFlags()
- header = struct.pack(structFileHeader, stringFileHeader,
+ header = struct.pack(structFileHeader, magicFileHeader,
self.extract_version, self.reserved, flag_bits,
self.compress_type, dostime, dosdate, CRC,
compress_size, file_size,
@@ -299,16 +393,15 @@
idx = 0
# ZIP64 extension (large files and/or large archives)
- # XXX Is this correct? won't this exclude 2**32-1 byte files?
if self.file_size in (0xffffffffffffffffL, 0xffffffffL):
self.file_size = counts[idx]
idx += 1
- if self.compress_size == -1 or self.compress_size == 0xFFFFFFFFL:
+ if self.compress_size == 0xFFFFFFFFL:
self.compress_size = counts[idx]
idx += 1
- if self.header_offset == -1 or self.header_offset == 0xffffffffL:
+ if self.header_offset == 0xffffffffL:
old = self.header_offset
self.header_offset = counts[idx]
idx+=1
@@ -572,7 +665,7 @@
class ZipFile:
""" Class with methods to open, read, write, close, list zip files.
- z = ZipFile(file, mode="r", compression=ZIP_STORED, allowZip64=True)
+ z = ZipFile(file, mode="r", compression=ZIP_STORED, allowZip64=False)
file: Either the path to the file, or a file-like object.
If it is a path, the file will be opened and closed by ZipFile.
@@ -608,6 +701,7 @@
self.compression = compression # Method of compression
self.mode = key = mode.replace('b', '')[0]
self.pwd = None
+ self.comment = ''
# Check if we were passed a file-like object
if isinstance(file, basestring):
@@ -663,18 +757,20 @@
raise BadZipfile, "File is not a zip file"
if self.debug > 1:
print endrec
- size_cd = endrec[5] # bytes in central directory
- offset_cd = endrec[6] # offset of central directory
- self.comment = endrec[8] # archive comment
- # endrec[9] is the offset of the "End of Central Dir" record
- if endrec[9] > ZIP64_LIMIT:
- x = endrec[9] - size_cd - 56 - 20
- else:
- x = endrec[9] - size_cd
+ size_cd = endrec[_ECD_SIZE] # bytes in central directory
+ offset_cd = endrec[_ECD_OFFSET] # offset of central directory
+ self.comment = endrec[_ECD_COMMENT] # archive comment
+
# "concat" is zero, unless zip was concatenated to another file
- concat = x - offset_cd
+ concat = endrec[_ECD_LOCATION] - size_cd - offset_cd
+ if endrec[_ECD_LOCATION] > ZIP64_LIMIT:
+ # If the offset of the "End of Central Dir" record requires Zip64
+ # extension structures, account for them
+ concat -= (sizeEndCentDir64 + sizeEndCentDir64Locator)
+
if self.debug > 2:
- print "given, inferred, offset", offset_cd, x, concat
+ inferred = concat + offset_cd
+ print "given, inferred, offset", offset_cd, inferred, concat
# self.start_dir: Position of start of central directory
self.start_dir = offset_cd + concat
fp.seek(self.start_dir, 0)
@@ -682,9 +778,8 @@
fp = cStringIO.StringIO(data)
total = 0
while total < size_cd:
- centdir = fp.read(46)
- total = total + 46
- if centdir[0:4] != stringCentralDir:
+ centdir = fp.read(sizeCentralDir)
+ if centdir[0:4] != magicCentralDir:
raise BadZipfile, "Bad magic number for central directory"
centdir = struct.unpack(structCentralDir, centdir)
if self.debug > 2:
@@ -694,9 +789,6 @@
x = ZipInfo(filename)
x.extra = fp.read(centdir[_CD_EXTRA_FIELD_LENGTH])
x.comment = fp.read(centdir[_CD_COMMENT_LENGTH])
- total = (total + centdir[_CD_FILENAME_LENGTH]
- + centdir[_CD_EXTRA_FIELD_LENGTH]
- + centdir[_CD_COMMENT_LENGTH])
x.header_offset = centdir[_CD_LOCAL_HEADER_OFFSET]
(x.create_version, x.create_system, x.extract_version, x.reserved,
x.flag_bits, x.compress_type, t, d,
@@ -712,6 +804,12 @@
x.filename = x._decodeFilename()
self.filelist.append(x)
self.NameToInfo[x.filename] = x
+
+ # update total bytes read from central directory
+ total = (total + sizeCentralDir + centdir[_CD_FILENAME_LENGTH]
+ + centdir[_CD_EXTRA_FIELD_LENGTH]
+ + centdir[_CD_COMMENT_LENGTH])
+
if self.debug > 2:
print "total", total
@@ -743,7 +841,6 @@
except BadZipfile:
return zinfo.filename
-
def getinfo(self, name):
"""Return the instance of ZipInfo given 'name'."""
info = self.NameToInfo.get(name)
@@ -787,8 +884,8 @@
zef_file.seek(zinfo.header_offset, 0)
# Skip the file header:
- fheader = zef_file.read(30)
- if fheader[0:4] != stringFileHeader:
+ fheader = zef_file.read(sizeFileHeader)
+ if fheader[0:4] != magicFileHeader:
raise BadZipfile, "Bad magic number for file header"
fheader = struct.unpack(structFileHeader, fheader)
@@ -1048,15 +1145,15 @@
or zinfo.compress_size > ZIP64_LIMIT:
extra.append(zinfo.file_size)
extra.append(zinfo.compress_size)
- file_size = 0xffffffff #-1
- compress_size = 0xffffffff #-1
+ file_size = 0xffffffff
+ compress_size = 0xffffffff
else:
file_size = zinfo.file_size
compress_size = zinfo.compress_size
if zinfo.header_offset > ZIP64_LIMIT:
extra.append(zinfo.header_offset)
- header_offset = 0xffffffffL # -1 32 bit
+ header_offset = 0xffffffffL
else:
header_offset = zinfo.header_offset
@@ -1076,7 +1173,7 @@
try:
filename, flag_bits = zinfo._encodeFilenameFlags()
centdir = struct.pack(structCentralDir,
- stringCentralDir, create_version,
+ magicCentralDir, create_version,
zinfo.create_system, extract_version, zinfo.reserved,
flag_bits, zinfo.compress_type, dostime, dosdate,
zinfo.CRC, compress_size, file_size,
@@ -1100,27 +1197,35 @@
pos2 = self.fp.tell()
# Write end-of-zip-archive record
+ centDirOffset = pos1
if pos1 > ZIP64_LIMIT:
# Need to write the ZIP64 end-of-archive records
zip64endrec = struct.pack(
- structEndArchive64, stringEndArchive64,
+ structEndCentDir64, magicEndCentDir64,
44, 45, 45, 0, 0, count, count, pos2 - pos1, pos1)
self.fp.write(zip64endrec)
zip64locrec = struct.pack(
- structEndArchive64Locator,
- stringEndArchive64Locator, 0, pos2, 1)
+ structEndCentDir64Locator,
+ magicEndCentDir64Locator, 0, pos2, 1)
self.fp.write(zip64locrec)
+ centDirOffset = 0xFFFFFFFF
- endrec = struct.pack(structEndArchive, stringEndArchive,
- 0, 0, count, count, pos2 - pos1, 0xffffffffL, 0)
- self.fp.write(endrec)
-
- else:
- endrec = struct.pack(structEndArchive, stringEndArchive,
- 0, 0, count, count, pos2 - pos1, pos1, 0)
- self.fp.write(endrec)
+ # check for valid comment length
+ if len(self.comment) > ZIP_MAX_COMMENT:
+ if self.debug > 0:
+ msg = 'Archive comment is too long; truncating to %d bytes' \
+ % ZIP_MAX_COMMENT
+ print msg
+ self.comment = self.comment[:ZIP_MAX_COMMENT]
+
+ endrec = struct.pack(structEndCentDir, magicEndCentDir,
+ 0, 0, count % ZIP_FILECOUNT_LIMIT,
+ count % ZIP_FILECOUNT_LIMIT, pos2 - pos1,
+ centDirOffset, len(self.comment))
+ self.fp.write(endrec)
+ self.fp.write(self.comment)
self.fp.flush()
+
if not self._filePassed:
self.fp.close()
self.fp = None
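The end-record layout written by close() above can be illustrated with a fabricated example; the comment bytes simply follow the fixed 22-byte structure, with their length stored in the record's final field:

```python
import struct

structEndCentDir = "<4s4H2LH"
magicEndCentDir = b"PK\x05\x06"
comment = b"hello"

# Pack an end record for a 3-entry archive whose central directory is
# 138 bytes long and starts at offset 1000 (all values illustrative).
endrec = struct.pack(structEndCentDir, magicEndCentDir,
                     0, 0, 3, 3, 138, 1000, len(comment)) + comment
print(len(endrec))  # 27: the fixed 22-byte record plus the comment
```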
Modified: python/branches/tlee-ast-optimize/Misc/NEWS
==============================================================================
--- python/branches/tlee-ast-optimize/Misc/NEWS (original)
+++ python/branches/tlee-ast-optimize/Misc/NEWS Sat Jul 5 13:12:42 2008
@@ -29,10 +29,25 @@
would not cause a syntax error. This was regression from 2.4 caused by the
switch to the new compiler.
-
Library
-------
+- Issue #2663: add filtering capability to shutil.copytree().
+
+- Issue #1622: Correct interpretation of various ZIP header fields.
+
+- Issue #1526: Allow more than 64k files to be added to a Zip64 file.
+
+- Issue #1746: Correct handling of zipfile archive comments (previously
+ archives with comments over 4k were flagged as invalid). Allow writing
+ Zip files with archive comments by setting the 'comment' attribute of a ZipFile.
+
+- Issue #449227: The rlcompleter module now appends "(" to completions that
+ refer to callable objects.
+
+- Issue #3190: Pydoc now hides the automatic module attribute __package__ (the
+ handling is now the same as that of other special attributes like __name__).
+
- Issue #2885 (partial): The urllib.urlopen() function has been deprecated for
removal in Python 3.0 in favor of urllib2.urlopen().
@@ -40,7 +55,6 @@
urllib module in Python 3.0 to urllib.request, urllib.parse, and
urllib.error.
-
Build
-----
Modified: python/branches/tlee-ast-optimize/Modules/nismodule.c
==============================================================================
--- python/branches/tlee-ast-optimize/Modules/nismodule.c (original)
+++ python/branches/tlee-ast-optimize/Modules/nismodule.c Sat Jul 5 13:12:42 2008
@@ -98,6 +98,7 @@
struct ypcallback_data {
PyObject *dict;
int fix;
+ PyThreadState *state;
};
static int
@@ -109,6 +110,7 @@
PyObject *val;
int err;
+ PyEval_RestoreThread(indata->state);
if (indata->fix) {
if (inkeylen > 0 && inkey[inkeylen-1] == '\0')
inkeylen--;
@@ -127,10 +129,11 @@
err = PyDict_SetItem(indata->dict, key, val);
Py_DECREF(key);
Py_DECREF(val);
- if (err != 0) {
+ if (err != 0)
PyErr_Clear();
- return 1;
- }
+ indata->state = PyEval_SaveThread();
+ if (err != 0)
+ return 1;
return 0;
}
return 1;
@@ -206,9 +209,9 @@
data.dict = dict;
map = nis_mapname (map, &data.fix);
cb.data = (char *)&data;
- Py_BEGIN_ALLOW_THREADS
+ data.state = PyEval_SaveThread();
err = yp_all (domain, map, &cb);
- Py_END_ALLOW_THREADS
+ PyEval_RestoreThread(data.state);
if (err != 0) {
Py_DECREF(dict);
return nis_error(err);
Modified: python/branches/tlee-ast-optimize/Python/ceval.c
==============================================================================
--- python/branches/tlee-ast-optimize/Python/ceval.c (original)
+++ python/branches/tlee-ast-optimize/Python/ceval.c Sat Jul 5 13:12:42 2008
@@ -615,18 +615,20 @@
COMPARE_OP is often followed by JUMP_IF_FALSE or JUMP_IF_TRUE. And,
those opcodes are often followed by a POP_TOP.
- Verifying the prediction costs a single high-speed test of register
+ Verifying the prediction costs a single high-speed test of a register
variable against a constant. If the pairing was good, then the
- processor has a high likelihood of making its own successful branch
- prediction which results in a nearly zero overhead transition to the
- next opcode.
-
- A successful prediction saves a trip through the eval-loop including
- its two unpredictable branches, the HAS_ARG test and the switch-case.
-
- If collecting opcode statistics, turn off prediction so that
- statistics are accurately maintained (the predictions bypass
- the opcode frequency counter updates).
+ processor's own internal branch prediction has a high likelihood of
+ success, resulting in a nearly zero-overhead transition to the
+ next opcode. A successful prediction saves a trip through the eval-loop
+ including its two unpredictable branches, the HAS_ARG test and the
+ switch-case. Combined with the processor's internal branch prediction,
+ a successful PREDICT has the effect of making the two opcodes run as if
+ they were a single new opcode with the bodies combined.
+
+ If collecting opcode statistics, your choices are to either keep the
+ predictions turned-on and interpret the results as if some opcodes
+ had been combined or turn-off predictions so that the opcode frequency
+ counter updates for both opcodes.
*/
#ifdef DYNAMIC_EXECUTION_PROFILE
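The reworded comment above describes CPython's PREDICT/PREDICTED macro scheme. The idea can be sketched as a toy stack machine in Python (hypothetical opcodes, not CPython's real ones): after a compare, peek at the next opcode and, if it is the commonly paired conditional jump, handle it inline so the fused pair costs one trip through the dispatch loop instead of two.

```python
# Toy bytecode interpreter illustrating opcode prediction.
LOAD_CONST, COMPARE_LT, JUMP_IF_FALSE, RETURN = range(4)

def run(code, consts):
    stack, pc, dispatches = [], 0, 0
    while True:
        dispatches += 1            # one trip through the "eval loop"
        op, arg = code[pc]
        pc += 1
        if op == LOAD_CONST:
            stack.append(consts[arg])
        elif op == COMPARE_LT:
            b, a = stack.pop(), stack.pop()
            stack.append(a < b)
            # PREDICT(JUMP_IF_FALSE): on a hit, run the jump inline so the
            # pair behaves like a single combined opcode.
            if pc < len(code) and code[pc][0] == JUMP_IF_FALSE:
                _, target = code[pc]
                pc += 1
                if not stack.pop():
                    pc = target
        elif op == JUMP_IF_FALSE:  # fallback when the prediction misses
            if not stack.pop():
                pc = arg
        elif op == RETURN:
            return stack.pop(), dispatches

# "return 'yes' if 1 < 2 else 'no'"
code = [(LOAD_CONST, 0), (LOAD_CONST, 1), (COMPARE_LT, None),
        (JUMP_IF_FALSE, 6), (LOAD_CONST, 2), (RETURN, None),
        (LOAD_CONST, 3), (RETURN, None)]
consts = [1, 2, "yes", "no"]
result, dispatches = run(code, consts)  # 5 dispatches instead of 6
```

As the comment notes, a successful prediction also skips the dispatch loop's unpredictable branches, which is where the real savings come from in CPython; this sketch only counts the dispatches themselves.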
Modified: python/branches/tlee-ast-optimize/Python/pythonrun.c
==============================================================================
--- python/branches/tlee-ast-optimize/Python/pythonrun.c (original)
+++ python/branches/tlee-ast-optimize/Python/pythonrun.c Sat Jul 5 13:12:42 2008
@@ -231,14 +231,14 @@
if (install_sigs)
initsigs(); /* Signal handling stuff, including initintr() */
- /* Initialize warnings. */
- _PyWarnings_Init();
- if (PySys_HasWarnOptions()) {
- PyObject *warnings_module = PyImport_ImportModule("warnings");
- if (!warnings_module)
- PyErr_Clear();
- Py_XDECREF(warnings_module);
- }
+ /* Initialize warnings. */
+ _PyWarnings_Init();
+ if (PySys_HasWarnOptions()) {
+ PyObject *warnings_module = PyImport_ImportModule("warnings");
+ if (!warnings_module)
+ PyErr_Clear();
+ Py_XDECREF(warnings_module);
+ }
initmain(); /* Module __main__ */
if (!Py_NoSiteFlag)
@@ -1128,7 +1128,7 @@
PyErr_NormalizeException(&exception, &v, &tb);
if (exception == NULL)
return;
- /* Now we know v != NULL too */
+ /* Now we know v != NULL too */
if (set_sys_last_vars) {
PySys_SetObject("last_type", exception);
PySys_SetObject("last_value", v);
@@ -1903,14 +1903,14 @@
PyAPI_FUNC(PyObject *)
PyRun_File(FILE *fp, const char *p, int s, PyObject *g, PyObject *l)
{
- return PyRun_FileExFlags(fp, p, s, g, l, 0, NULL);
+ return PyRun_FileExFlags(fp, p, s, g, l, 0, NULL);
}
#undef PyRun_FileEx
PyAPI_FUNC(PyObject *)
PyRun_FileEx(FILE *fp, const char *p, int s, PyObject *g, PyObject *l, int c)
{
- return PyRun_FileExFlags(fp, p, s, g, l, c, NULL);
+ return PyRun_FileExFlags(fp, p, s, g, l, c, NULL);
}
#undef PyRun_FileFlags
@@ -1918,7 +1918,7 @@
PyRun_FileFlags(FILE *fp, const char *p, int s, PyObject *g, PyObject *l,
PyCompilerFlags *flags)
{
- return PyRun_FileExFlags(fp, p, s, g, l, 0, flags);
+ return PyRun_FileExFlags(fp, p, s, g, l, 0, flags);
}
#undef PyRun_SimpleFile