From commits-noreply at bitbucket.org Mon Nov 2 13:01:43 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Mon, 2 Nov 2009 12:01:43 +0000 (UTC) Subject: [py-svn] py-trunk commit e64e3c434127: update and fix docs for installation Message-ID: <20091102120143.995857EFF6@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1257163248 -3600 # Node ID e64e3c4341279dc9377fec0676ef33c911e9ccb1 # Parent 15fcc2ee86797cf1ac3666a8dc4ec5bd89909d2a update and fix docs for installation - rework installation - add a new FAQ entry related to issue58 Windows/setuptools/multiprocess - strike api/source references --- a/doc/download.txt +++ /dev/null @@ -1,123 +0,0 @@ -.. - ============== - Downloading - ============== - - .. _`PyPI project page`: http://pypi.python.org/pypi/py/ - - Latest Release, see `PyPI project page`_ - -using easy_install -=================================================== - -With a working `setuptools installation`_ or `distribute installation`_ -you can type:: - - easy_install -U py - -to get the latest release of the py lib. The ``-U`` switch -will trigger an upgrade if you already have an older version installed. -On Linux systems you may need to execute the command as superuser and -on Windows you might need to write down the full path to ``easy_install``. -The py lib and its tools are expected to work well on Linux, -Windows and OSX, Python versions 2.4, 2.5, 2.6 through to -the Python3 versions 3.0 and 3.1. - -.. _mercurial: http://mercurial.selenic.com/wiki/ -.. _`distribute installation`: http://pypi.python.org/pypi/distribute -.. _checkout: -.. _tarball: - -Working from version control or a tarball -================================================= - -To follow development or help with fixing things -for the next release, checkout the complete code -and documentation source with mercurial_:: - - hg clone https://bitbucket.org/hpk42/py-trunk/ - -This currrently contains a 1.0.x branch and the -default 'trunk' branch where mainline development -takes place. There also is a readonly subversion -checkout available:: - - svn co https://codespeak.net/svn/py/dist - -You can also go to the python package index and -download and unpack a TAR file:: - - http://pypi.python.org/pypi/py/ - -activating checkout with setuptools --------------------------------------------- - -With a working `setuptools installation`_ you can issue:: - - python setup.py develop - -in order to work with the tools and the lib of your checkout. - -.. _`no-setuptools`: - -activating a checkout or tarball without setuptools -------------------------------------------------------------- - -To import the py lib the ``py`` package directory needs to -be on the ``$PYTHONPATH``. If you exexute scripts directly -from ``py/bin/`` or ``py\bin\win32`` they will find their -containing py lib automatically. - -It is usually a good idea to add the parent directory of the ``py`` package -directory to your ``PYTHONPATH`` and ``py/bin`` or ``py\bin\win32`` to your -system wide ``PATH`` settings. There are helper scripts that set ``PYTHONPATH`` and ``PATH`` on your system: - -on windows execute:: - - # inside autoexec.bat or shell startup - c:\\path\to\checkout\py\bin\env.cmd - -on linux/OSX add this to your shell initialization:: - - # inside .bashrc - eval `python ~/path/to/checkout/py/bin/env.py` - -both of which which will get you good settings -for ``PYTHONPATH`` and ``PATH``. 
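A rough in-process equivalent of those ``PYTHONPATH``/``PATH`` settings, shown here only as an illustrative sketch (the checkout location is a placeholder and the snippet is not taken from the env helper scripts themselves), is to extend ``sys.path`` by hand before importing ``py``::

    # hypothetical sketch: use a py-lib checkout without the env.cmd/env.py helpers
    import sys

    checkout = "/home/user/path/to/checkout"   # assumed parent dir of the 'py' package
    if checkout not in sys.path:
        sys.path.insert(0, checkout)

    import py
    print(py.__file__)   # should point inside the checkout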
- - -note: scripts look for "nearby" py-lib ------------------------------------------------------ - -Note that the `command line scripts`_ will look -for "nearby" py libs, so if you have a layout like this:: - - mypkg/ - subpkg1/ - tests/ - tests/ - py/ - -issuing ``py.test subpkg1`` will use the py lib -from that projects root directory. - -.. _`command line scripts`: bin.html - -Debian and RPM packages -=================================== - -As of August 2009 pytest/pylib 1.0 RPMs and Debian packages -are not available. You will only find 0.9 versions - -on Debian systems look for ``python-codespeak-lib`` -and Dwayne Bailey has put together a Fedora `RPM`_. - -If you can help with providing/upgrading distribution -packages please use of the contact_ channels in case -of questions or need for changes. - -.. _contact: contact.html - -.. _`RPM`: http://translate.sourceforge.net/releases/testing/fedora/pylib-0.9.2-1.fc9.noarch.rpm - -.. _`setuptools installation`: http://pypi.python.org/pypi/setuptools - --- a/doc/test/quickstart.txt +++ b/doc/test/quickstart.txt @@ -5,16 +5,12 @@ Quickstart ================== -.. _here: ../download.html#no-setuptools +.. _here: ../install.html - -With a `setuptools installation`_ (otherwise see here_) you can type:: +If you have a version of ``easy_install`` (otherwise see here_) just type:: easy_install -U py -On Linux systems you may need to execute this as the superuser and -on Windows you might need to write down the full path to ``easy_install``. - Now create a file ``test_sample.py`` with the following content: .. sourcecode:: python @@ -63,7 +59,7 @@ a progress report and important details .. _`contact`: ../contact.html .. _`automatically collected`: features.html#autocollect -.. _download: ../download.html +.. _install: ../install.html .. _features: features.html .. _tutorials: talks.html --- a/doc/faq.txt +++ b/doc/faq.txt @@ -120,3 +120,29 @@ and implement the `parametrization schem .. _`pytest_generate_tests`: test/funcargs.html#parametrizing-tests .. _`parametrization scheme of your choice`: http://tetamap.wordpress.com/2009/05/13/parametrizing-python-tests-generalized/ + + +py.test interaction with other packages +=============================================== + +What's up with multiprocess on Windows? +------------------------------------------------------------ + +On windows the multiprocess package will instantiate sub processes +by pickling and thus implicitely re-import a lot of local modules. +Unfortuantely, setuptools-0.6.11 does not ``if __name__=='__main__'`` +protect its generated command line script. This leads to infinite +recursion when running a test that instantiates Processes. +There are two workarounds: + +* `install Distribute`_ as a drop-in replacement for setuptools + and re-install py.test + +* `directly use a checkout`_ which avoids all setuptools/Distribute + installation + +.. _`directly use a checkout`: install.html#directly-use-a-checkout + +.. _`install distribute`: http://pypi.python.org/pypi/distribute#installation-instructions + + --- a/doc/code.txt +++ b/doc/code.txt @@ -2,35 +2,33 @@ py.code: higher level python code and introspection objects ================================================================================ -The :api:`py.code` part of the pylib contains some functionality to help +The ``py.code`` part of the pylib contains some functionality to help dealing with Python code objects. 
Even though working with Python's internal code objects (as found on frames and callables) can be very powerful, it's usually also quite cumbersome, because the API provided by core Python is relatively low level and not very accessible. -The :api:`py.code` library tries to simplify accessing the code objects as well +The ``py.code`` library tries to simplify accessing the code objects as well as creating them. There is a small set of interfaces a user needs to deal with, all nicely bundled together, and with a rich set of 'Pythonic' functionality. -source: :source:`py/code/` - Contents of the library ======================= -Every object in the :api:`py.code` library wraps a code Python object related -to code objects, source code, frames and tracebacks: the :api:`py.code.Code` -class wraps code objects, :api:`py.code.Source` source snippets, -:api:`py.code.Traceback` exception tracebacks, :api:`py.code.Frame` frame -objects (as found in e.g. tracebacks) and :api:`py.code.ExceptionInfo` the +Every object in the ``py.code`` library wraps a code Python object related +to code objects, source code, frames and tracebacks: the ``py.code.Code`` +class wraps code objects, ``py.code.Source`` source snippets, +``py.code.Traceback` exception tracebacks, :api:`py.code.Frame`` frame +objects (as found in e.g. tracebacks) and ``py.code.ExceptionInfo`` the tuple provided by sys.exc_info() (containing exception and traceback information when an exception occurs). Also in the library is a helper function -:api:`py.code.compile()` that provides the same functionality as Python's +``py.code.compile()`` that provides the same functionality as Python's built-in 'compile()' function, but returns a wrapped code object. The wrappers ============ -:api:`py.code.Code` +``py.code.Code`` ------------------- Code objects are instantiated with a code object or a callable as argument, @@ -48,9 +46,7 @@ A quick example:: >>> str(c.source()).split('\n')[0] "def read(self, mode='r'):" -source: :source:`py/code/code.py` - -:api:`py.code.Source` +``py.code.Source`` --------------------- Source objects wrap snippets of Python source code, providing a simple yet @@ -71,9 +67,8 @@ Example:: >>> str(sub).strip() # XXX why is the strip() required?!? 'print "foo"' -source: :source:`py/code/source.py` -:api:`py.code.Traceback` +``py.code.Traceback`` ------------------------ Tracebacks are usually not very easy to examine, you need to access certain @@ -97,15 +92,13 @@ Example:: >>> str(first.statement).strip().startswith('raise ValueError') True -source: :source:`py/code/code.py` - -:api:`py.code.Frame` +``py.code.Frame`` -------------------- -Frame wrappers are used in :api:`py.code.Traceback` items, and will usually not +Frame wrappers are used in ``py.code.Traceback`` items, and will usually not directly be instantiated. They provide some nice methods to evaluate code 'inside' the frame (using the frame's local variables), get to the underlying -code (frames have a code attribute that points to a :api:`py.code.Code` object) +code (frames have a code attribute that points to a ``py.code.Code`` object) and examine the arguments. 
Example (using the 'first' TracebackItem instance created above):: @@ -118,7 +111,7 @@ Example (using the 'first' TracebackItem >>> [namevalue[0] for namevalue in frame.getargs()] ['cls', 'path'] -:api:`py.code.ExceptionInfo` +``py.code.ExceptionInfo`` ---------------------------- A wrapper around the tuple returned by sys.exc_info() (will call sys.exc_info() --- a/doc/path.txt +++ b/doc/path.txt @@ -3,17 +3,17 @@ py.path ======= The 'py' lib provides a uniform high-level api to deal with filesystems -and filesystem-like interfaces: :api:`py.path`. It aims to offer a central +and filesystem-like interfaces: ``py.path``. It aims to offer a central object to fs-like object trees (reading from and writing to files, adding files/directories, examining the types and structure, etc.), and out-of-the-box provides a number of implementations of this API. -Path implementations provided by :api:`py.path` +Path implementations provided by ``py.path`` =============================================== .. _`local`: -:api:`py.path.local` +``py.path.local`` -------------------- The first and most obvious of the implementations is a wrapper around a local @@ -21,8 +21,8 @@ filesystem. It's just a bit nicer in usa of course all the functionality is bundled together rather than spread over a number of modules. -Example usage, here we use the :api:`py.test.ensuretemp()` function to create -a :api:`py.path.local` object for us (which wraps a directory): +Example usage, here we use the ``py.test.ensuretemp()`` function to create +a ``py.path.local`` object for us (which wraps a directory): .. sourcecode:: pycon @@ -40,17 +40,17 @@ a :api:`py.path.local` object for us (wh >>> foofile.read(1) 'b' -:api:`py.path.svnurl` and :api:`py.path.svnwc` +``py.path.svnurl` and :api:`py.path.svnwc`` ---------------------------------------------- -Two other :api:`py.path` implementations that the py lib provides wrap the +Two other ``py.path`` implementations that the py lib provides wrap the popular `Subversion`_ revision control system: the first (called 'svnurl') by interfacing with a remote server, the second by wrapping a local checkout. Both allow you to access relatively advanced features such as metadata and versioning, and both in a way more user-friendly manner than existing other solutions. -Some example usage of :api:`py.path.svnurl`: +Some example usage of ``py.path.svnurl``: .. sourcecode:: pycon @@ -65,7 +65,7 @@ Some example usage of :api:`py.path.svnu >>> time.strftime('%Y-%m-%d', time.gmtime(firstentry.date)) '2004-10-02' -Example usage of :api:`py.path.svnwc`: +Example usage of ``py.path.svnwc``: .. sourcecode:: pycon @@ -125,7 +125,7 @@ specific directory. Working with Paths ....................... -This example shows the :api:`py.path` features to deal with +This example shows the ``py.path`` features to deal with filesystem paths Note that the filesystem is never touched, all operations are performed on a string level (so the paths don't have to exist, either): @@ -154,7 +154,7 @@ don't have to exist, either): >>> p4.purebasename == "bar" True -This should be possible on every implementation of :api:`py.path`, so +This should be possible on every implementation of ``py.path``, so regardless of whether the implementation wraps a UNIX filesystem, a Windows one, or a database or object tree, these functions should be available (each with their own notion of path seperators and dealing with conversions, etc.). @@ -189,7 +189,7 @@ Setting svn-properties ....................... 
As an example of 'uncommon' methods, we'll show how to read and write -properties in an :api:`py.path.svnwc` instance: +properties in an ``py.path.svnwc`` instance: .. sourcecode:: pycon @@ -254,7 +254,7 @@ to provide this choice (and getting rid of platform-dependencies as much as possible). There is some experimental small approach -(:source:`py/path/gateway/`) aiming at having +(``py/path/gateway/``) aiming at having a convenient Remote Path implementation. There are various hacks out there to have --- a/doc/test/plugin/links.txt +++ b/doc/test/plugin/links.txt @@ -16,7 +16,7 @@ .. _`pytest_figleaf.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_figleaf.py .. _`pytest_hooklog.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_hooklog.py .. _`pytest_skipping.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_skipping.py -.. _`checkout the py.test development version`: ../../download.html#checkout +.. _`checkout the py.test development version`: ../../install.html#checkout .. _`pytest_helpconfig.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_helpconfig.py .. _`oejskit`: oejskit.html .. _`doctest`: doctest.html --- /dev/null +++ b/doc/install.txt @@ -0,0 +1,130 @@ +.. + ============== + Downloading + ============== + + .. _`PyPI project page`: http://pypi.python.org/pypi/py/ + + Latest Release, see `PyPI project page`_ + +using easy_install (via Distribute or setuptools) +=================================================== + +It is recommended to use `Distribute for installation`_ as a drop-in +replacement for setuptools_. While setuptools should work well on +Python2 versions, `Distribute`_ allows to install py.test on Python3 +and it avoids issue on Windows. With either packaging system +you can type:: + + easy_install -U py + +to get the latest release of the py lib and py.test. The ``-U`` switch +will trigger an upgrade if you already have an older version installed. +On Linux systems you may need to execute the command as superuser and +on Windows you might need to write down the full path to ``easy_install``. + +The py lib and its tools are expected to work well on Linux, +Windows and OSX, Python versions 2.4, 2.5, 2.6 through to +the Python3 versions 3.0 and 3.1. Jython + +.. _mercurial: http://mercurial.selenic.com/wiki/ +.. _`Distribute`: +.. _`Distribute for installation`: http://pypi.python.org/pypi/distribute#installation-instructions +.. _`distribute installation`: http://pypi.python.org/pypi/distribute +.. _checkout: +.. _tarball: + +Working from version control or a tarball +================================================= + +To follow development or help with fixing things +for the next release, checkout the complete code +and documentation source with mercurial_:: + + hg clone https://bitbucket.org/hpk42/py-trunk/ + +This currrently contains a 1.0.x branch and the +default 'trunk' branch where mainline development +takes place. + +.. There also is a readonly subversion + checkout available which contains the latest release:: + svn co https://codespeak.net/svn/py/dist + +You can go to the python package index and +download and unpack a TAR file:: + + http://pypi.python.org/pypi/py/ + + +activating a checkout with setuptools +-------------------------------------------- + +With a working `Distribute`_ or setuptools_ installation you can type:: + + python setup.py develop + +in order to work with the tools and the lib of your checkout. + +.. _`no-setuptools`: + +.. 
_`directly use a checkout`: + +directly use a checkout or tarball +------------------------------------------------------------- + +Once you got yourself a checkout_ or tarball_ you only need to +set ``PYTHONPATH`` and ``PATH`` environment variables. +It is usually a good idea to add the parent directory of the ``py`` package +directory to your ``PYTHONPATH`` and ``py/bin`` or ``py\bin\win32`` to your +system wide ``PATH`` settings. There are helper scripts that set ``PYTHONPATH`` and ``PATH`` on your system: + +on windows execute:: + + # inside autoexec.bat or shell startup + c:\\path\to\checkout\bin\env.cmd + +on linux/OSX add this to your shell initialization:: + + # inside .bashrc + eval `python ~/path/to/checkout/bin/env.py` + +both of which which will get you good settings +for ``PYTHONPATH`` and ``PATH``. + + +note: scripts look for "nearby" py-lib +----------------------------------------------------- + +Note that all `command line scripts`_ will look +for "nearby" py libs, so if you have a layout like this:: + + mypkg/ + subpkg1/ + tests/ + tests/ + py/ + +issuing ``py.test subpkg1`` will use the py lib +from that projects root directory. + +.. _`command line scripts`: bin.html + +Debian and RPM packages +=================================== + +As of August 2009 pytest/pylib 1.0 RPMs and Debian packages +are not available. You will only find 0.9 versions - +on Debian systems look for ``python-codespeak-lib`` +and Dwayne Bailey has put together a Fedora `RPM`_. + +If you can help with providing/upgrading distribution +packages please use of the contact_ channels in case +of questions or need for changes. + +.. _contact: contact.html + +.. _`RPM`: http://translate.sourceforge.net/releases/testing/fedora/pylib-0.9.2-1.fc9.noarch.rpm + +.. _`setuptools`: http://pypi.python.org/pypi/setuptools + --- /dev/null +++ b/doc/download.html @@ -0,0 +1,18 @@ + + + + + + + + + + + --- a/doc/misc.txt +++ b/doc/misc.txt @@ -5,7 +5,7 @@ Miscellaneous features of the py lib Mapping the standard python library into py =========================================== -The :api:`py.std` object allows lazy access to +The ``py.std`` object allows lazy access to standard library modules. For example, to get to the print-exception functionality of the standard library you can write:: @@ -21,9 +21,9 @@ Support for interaction with system util ====================================================== Currently, the py lib offers two ways to interact with -system executables. :api:`py.process.cmdexec()` invokes +system executables. ``py.process.cmdexec()`` invokes the shell in order to execute a string. The other -one, :api:`py.path.local`'s 'sysexec()' method lets you +one, ``py.path.local``'s 'sysexec()' method lets you directly execute a binary. Both approaches will raise an exception in case of a return- @@ -87,28 +87,7 @@ right version:: binsvn = py.path.local.sysfind('svn', checker=mysvn) - Cross-Python Version compatibility helpers ============================================= -sources: - - * :source:`py/builtin/` - -The compat and builtin namespaces help to write code using newer python features on older python interpreters. - -:api:`py.builtin` ------------------ - -:api:`py.builtin` provides builtin functions/types that were added in later Python -versions. If the used Python version used does not provide these builtins the -py lib provides some reimplementations. 
These currently are: - - * enumerate - * reversed - * sorted - * BaseException - * set and frozenset (using either the builtin, if available, or the sets - module) - -:api:`py.builtin.BaseException` is just ``Exception`` before Python 2.5. +The ``py.builtin`` namespace provides a number of helpers that help to write python code compatible across Python interpreters, mainly Python2 and Python3. Type ``help(py.builtin)`` on a Python prompt for a the selection of builtins. --- a/.hgignore +++ b/.hgignore @@ -17,3 +17,4 @@ syntax:glob build/ dist/ py.egg-info +issue/ --- a/bin-for-dist/makepluginlist.py +++ b/bin-for-dist/makepluginlist.py @@ -217,7 +217,7 @@ class PluginDoc(RestWriter): self.links.append(('plugins', 'index.html')) self.links.append(('get in contact', '../../contact.html')) self.links.append(('checkout the py.test development version', - '../../download.html#checkout')) + '../../install.html#checkout')) if 0: # this breaks the page layout and makes large doc files #self.h2("plugin source code") --- a/doc/io.txt +++ b/doc/io.txt @@ -9,10 +9,10 @@ execution of a program. IO Capturing examples =============================================== -:api:`py.io.StdCapture` +``py.io.StdCapture`` --------------------------- -Basic Example: +Basic Example:: >>> import py >>> capture = py.io.StdCapture() @@ -21,7 +21,7 @@ Basic Example: >>> out.strip() == "hello" True -For calling functions you may use a shortcut: +For calling functions you may use a shortcut:: >>> import py >>> def f(): print "hello" @@ -29,14 +29,14 @@ For calling functions you may use a shor >>> out.strip() == "hello" True -:api:`py.io.StdCaptureFD` +``py.io.StdCaptureFD`` --------------------------- If you also want to capture writes to the stdout/stderr -filedescriptors you may invoke: +filedescriptors you may invoke:: >>> import py, sys - >>> capture = py.io.StdCaptureFD() + >>> capture = py.io.StdCaptureFD(out=False, in_=False) >>> sys.stderr.write("world") >>> out,err = capture.reset() >>> err --- a/doc/confrest.py +++ b/doc/confrest.py @@ -57,7 +57,7 @@ pageTracker._trackPageview(); def fill_menubar(self): items = [ - self.a_docref("install", "download.html"), + self.a_docref("install", "install.html"), self.a_docref("contact", "contact.html"), self.a_docref("changelog", "changelog.html"), self.a_docref("faq", "faq.html"), From commits-noreply at bitbucket.org Mon Nov 2 14:53:30 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Mon, 2 Nov 2009 13:53:30 +0000 (UTC) Subject: [py-svn] py-trunk commit 6a83f1036449: adding more alternatives as asked for by bluebird75 Message-ID: <20091102135330.19F407EFAF@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1257169974 -3600 # Node ID 6a83f103644952a2d03286298f89cd371a38be6a # Parent e64e3c4341279dc9377fec0676ef33c911e9ccb1 adding more alternatives as asked for by bluebird75 --- a/doc/faq.txt +++ b/doc/faq.txt @@ -125,7 +125,7 @@ and implement the `parametrization schem py.test interaction with other packages =============================================== -What's up with multiprocess on Windows? +Issues with py.test, multiprocess and setuptools? 
------------------------------------------------------------ On windows the multiprocess package will instantiate sub processes @@ -133,14 +133,23 @@ by pickling and thus implicitely re-impo Unfortuantely, setuptools-0.6.11 does not ``if __name__=='__main__'`` protect its generated command line script. This leads to infinite recursion when running a test that instantiates Processes. -There are two workarounds: +There are these workarounds: * `install Distribute`_ as a drop-in replacement for setuptools - and re-install py.test + and install py.test * `directly use a checkout`_ which avoids all setuptools/Distribute installation +If those options are not available to you, you may also manually +fix the script that is created by setuptools by inserting an +``if __name__ == '__main__'``. Or you can create a "pytest.py" +script with this content and invoke that with the python version:: + + import py + if __name__ == '__main__': + py.cmdline.pytest() + .. _`directly use a checkout`: install.html#directly-use-a-checkout .. _`install distribute`: http://pypi.python.org/pypi/distribute#installation-instructions From commits-noreply at bitbucket.org Wed Nov 4 21:34:44 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Wed, 4 Nov 2009 20:34:44 +0000 (UTC) Subject: [py-svn] py-trunk commit e02499003290: make py lib a self-contained directory again Message-ID: <20091104203444.4061F7EEE6@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1257366847 -3600 # Node ID e024990032907e7def8bbe8e785dc26ef94ce38e # Parent 6a83f103644952a2d03286298f89cd371a38be6a make py lib a self-contained directory again - move and merge _py/ bits back to py/ - fixes all around --- a/_py/io/terminalwriter.py +++ /dev/null @@ -1,264 +0,0 @@ -""" - -Helper functions for writing to terminals and files. - -""" - - -import sys, os -import py - -def _getdimensions(): - import termios,fcntl,struct - call = fcntl.ioctl(0,termios.TIOCGWINSZ,"\000"*8) - height,width = struct.unpack( "hhhh", call ) [:2] - return height, width - -if sys.platform == 'win32': - # ctypes access to the Windows console - - STD_OUTPUT_HANDLE = -11 - STD_ERROR_HANDLE = -12 - FOREGROUND_BLUE = 0x0001 # text color contains blue. - FOREGROUND_GREEN = 0x0002 # text color contains green. - FOREGROUND_RED = 0x0004 # text color contains red. - FOREGROUND_WHITE = 0x0007 - FOREGROUND_INTENSITY = 0x0008 # text color is intensified. - BACKGROUND_BLUE = 0x0010 # background color contains blue. - BACKGROUND_GREEN = 0x0020 # background color contains green. - BACKGROUND_RED = 0x0040 # background color contains red. - BACKGROUND_WHITE = 0x0070 - BACKGROUND_INTENSITY = 0x0080 # background color is intensified. 
- - def GetStdHandle(kind): - import ctypes - return ctypes.windll.kernel32.GetStdHandle(kind) - - def SetConsoleTextAttribute(handle, attr): - import ctypes - ctypes.windll.kernel32.SetConsoleTextAttribute( - handle, attr) - - def _getdimensions(): - import ctypes - from ctypes import wintypes - - SHORT = ctypes.c_short - class COORD(ctypes.Structure): - _fields_ = [('X', SHORT), - ('Y', SHORT)] - class SMALL_RECT(ctypes.Structure): - _fields_ = [('Left', SHORT), - ('Top', SHORT), - ('Right', SHORT), - ('Bottom', SHORT)] - class CONSOLE_SCREEN_BUFFER_INFO(ctypes.Structure): - _fields_ = [('dwSize', COORD), - ('dwCursorPosition', COORD), - ('wAttributes', wintypes.WORD), - ('srWindow', SMALL_RECT), - ('dwMaximumWindowSize', COORD)] - STD_OUTPUT_HANDLE = -11 - handle = GetStdHandle(STD_OUTPUT_HANDLE) - info = CONSOLE_SCREEN_BUFFER_INFO() - ctypes.windll.kernel32.GetConsoleScreenBufferInfo( - handle, ctypes.byref(info)) - # Substract one from the width, otherwise the cursor wraps - # and the ending \n causes an empty line to display. - return info.dwSize.Y, info.dwSize.X - 1 - -def get_terminal_width(): - try: - height, width = _getdimensions() - except (SystemExit, KeyboardInterrupt): - raise - except: - # FALLBACK - width = int(os.environ.get('COLUMNS', 80))-1 - # XXX the windows getdimensions may be bogus, let's sanify a bit - width = max(width, 40) # we alaways need 40 chars - return width - -terminal_width = get_terminal_width() - -# XXX unify with _escaped func below -def ansi_print(text, esc, file=None, newline=True, flush=False): - if file is None: - file = sys.stderr - text = text.rstrip() - if esc and not isinstance(esc, tuple): - esc = (esc,) - if esc and sys.platform != "win32" and file.isatty(): - text = (''.join(['\x1b[%sm' % cod for cod in esc]) + - text + - '\x1b[0m') # ANSI color code "reset" - if newline: - text += '\n' - - if esc and sys.platform == "win32" and file.isatty(): - if 1 in esc: - bold = True - esc = tuple([x for x in esc if x != 1]) - else: - bold = False - esctable = {() : FOREGROUND_WHITE, # normal - (31,): FOREGROUND_RED, # red - (32,): FOREGROUND_GREEN, # green - (33,): FOREGROUND_GREEN|FOREGROUND_RED, # yellow - (34,): FOREGROUND_BLUE, # blue - (35,): FOREGROUND_BLUE|FOREGROUND_RED, # purple - (36,): FOREGROUND_BLUE|FOREGROUND_GREEN, # cyan - (37,): FOREGROUND_WHITE, # white - (39,): FOREGROUND_WHITE, # reset - } - attr = esctable.get(esc, FOREGROUND_WHITE) - if bold: - attr |= FOREGROUND_INTENSITY - STD_OUTPUT_HANDLE = -11 - STD_ERROR_HANDLE = -12 - if file is sys.stderr: - handle = GetStdHandle(STD_ERROR_HANDLE) - else: - handle = GetStdHandle(STD_OUTPUT_HANDLE) - SetConsoleTextAttribute(handle, attr) - file.write(text) - SetConsoleTextAttribute(handle, FOREGROUND_WHITE) - else: - file.write(text) - - if flush: - file.flush() - -def should_do_markup(file): - return hasattr(file, 'isatty') and file.isatty() \ - and os.environ.get('TERM') != 'dumb' - -class TerminalWriter(object): - _esctable = dict(black=30, red=31, green=32, yellow=33, - blue=34, purple=35, cyan=36, white=37, - Black=40, Red=41, Green=42, Yellow=43, - Blue=44, Purple=45, Cyan=46, White=47, - bold=1, light=2, blink=5, invert=7) - - # XXX deprecate stringio argument - def __init__(self, file=None, stringio=False, encoding=None): - self.encoding = encoding - - if file is None: - if stringio: - self.stringio = file = py.io.TextIO() - else: - file = py.std.sys.stdout - elif hasattr(file, '__call__'): - file = WriteFile(file, encoding=encoding) - self._file = file - self.fullwidth = 
get_terminal_width() - self.hasmarkup = should_do_markup(file) - - def _escaped(self, text, esc): - if esc and self.hasmarkup: - text = (''.join(['\x1b[%sm' % cod for cod in esc]) + - text +'\x1b[0m') - return text - - def markup(self, text, **kw): - esc = [] - for name in kw: - if name not in self._esctable: - raise ValueError("unknown markup: %r" %(name,)) - if kw[name]: - esc.append(self._esctable[name]) - return self._escaped(text, tuple(esc)) - - def sep(self, sepchar, title=None, fullwidth=None, **kw): - if fullwidth is None: - fullwidth = self.fullwidth - # the goal is to have the line be as long as possible - # under the condition that len(line) <= fullwidth - if title is not None: - # we want 2 + 2*len(fill) + len(title) <= fullwidth - # i.e. 2 + 2*len(sepchar)*N + len(title) <= fullwidth - # 2*len(sepchar)*N <= fullwidth - len(title) - 2 - # N <= (fullwidth - len(title) - 2) // (2*len(sepchar)) - N = (fullwidth - len(title) - 2) // (2*len(sepchar)) - fill = sepchar * N - line = "%s %s %s" % (fill, title, fill) - else: - # we want len(sepchar)*N <= fullwidth - # i.e. N <= fullwidth // len(sepchar) - line = sepchar * (fullwidth // len(sepchar)) - # in some situations there is room for an extra sepchar at the right, - # in particular if we consider that with a sepchar like "_ " the - # trailing space is not important at the end of the line - if len(line) + len(sepchar.rstrip()) <= fullwidth: - line += sepchar.rstrip() - - self.line(line, **kw) - - def write(self, s, **kw): - if s: - s = self._getbytestring(s) - if self.hasmarkup and kw: - s = self.markup(s, **kw) - self._file.write(s) - self._file.flush() - - def _getbytestring(self, s): - # XXX review this and the whole logic - if self.encoding and sys.version_info < (3,0) and isinstance(s, unicode): - return s.encode(self.encoding) - elif not isinstance(s, str): - return str(s) - return s - - def line(self, s='', **kw): - self.write(s, **kw) - self.write('\n') - -class Win32ConsoleWriter(TerminalWriter): - def write(self, s, **kw): - if s: - s = self._getbytestring(s) - if self.hasmarkup: - handle = GetStdHandle(STD_OUTPUT_HANDLE) - - if self.hasmarkup and kw: - attr = 0 - if kw.pop('bold', False): - attr |= FOREGROUND_INTENSITY - - if kw.pop('red', False): - attr |= FOREGROUND_RED - elif kw.pop('blue', False): - attr |= FOREGROUND_BLUE - elif kw.pop('green', False): - attr |= FOREGROUND_GREEN - else: - attr |= FOREGROUND_WHITE - - SetConsoleTextAttribute(handle, attr) - self._file.write(s) - self._file.flush() - if self.hasmarkup: - SetConsoleTextAttribute(handle, FOREGROUND_WHITE) - - def line(self, s="", **kw): - self.write(s+"\n", **kw) - -if sys.platform == 'win32': - TerminalWriter = Win32ConsoleWriter - -class WriteFile(object): - def __init__(self, writemethod, encoding=None): - self.encoding = encoding - self._writemethod = writemethod - - def write(self, data): - if self.encoding: - data = data.encode(self.encoding) - self._writemethod(data) - - def flush(self): - return - - --- a/_py/code/source.py +++ /dev/null @@ -1,347 +0,0 @@ -from __future__ import generators -import sys -import inspect, tokenize -import py -cpy_compile = compile - -try: - import _ast - from _ast import PyCF_ONLY_AST as _AST_FLAG -except ImportError: - _AST_FLAG = 0 - _ast = None - - -class Source(object): - """ a immutable object holding a source code fragment, - possibly deindenting it. 
- """ - def __init__(self, *parts, **kwargs): - self.lines = lines = [] - de = kwargs.get('deindent', True) - rstrip = kwargs.get('rstrip', True) - for part in parts: - if not part: - partlines = [] - if isinstance(part, Source): - partlines = part.lines - elif isinstance(part, py.builtin._basestring): - partlines = part.split('\n') - if rstrip: - while partlines: - if partlines[-1].strip(): - break - partlines.pop() - else: - partlines = getsource(part, deindent=de).lines - if de: - partlines = deindent(partlines) - lines.extend(partlines) - - def __eq__(self, other): - try: - return self.lines == other.lines - except AttributeError: - if isinstance(other, str): - return str(self) == other - return False - - def __getitem__(self, key): - if isinstance(key, int): - return self.lines[key] - else: - if key.step not in (None, 1): - raise IndexError("cannot slice a Source with a step") - return self.__getslice__(key.start, key.stop) - - def __len__(self): - return len(self.lines) - - def __getslice__(self, start, end): - newsource = Source() - newsource.lines = self.lines[start:end] - return newsource - - def strip(self): - """ return new source object with trailing - and leading blank lines removed. - """ - start, end = 0, len(self) - while start < end and not self.lines[start].strip(): - start += 1 - while end > start and not self.lines[end-1].strip(): - end -= 1 - source = Source() - source.lines[:] = self.lines[start:end] - return source - - def putaround(self, before='', after='', indent=' ' * 4): - """ return a copy of the source object with - 'before' and 'after' wrapped around it. - """ - before = Source(before) - after = Source(after) - newsource = Source() - lines = [ (indent + line) for line in self.lines] - newsource.lines = before.lines + lines + after.lines - return newsource - - def indent(self, indent=' ' * 4): - """ return a copy of the source object with - all lines indented by the given indent-string. - """ - newsource = Source() - newsource.lines = [(indent+line) for line in self.lines] - return newsource - - def getstatement(self, lineno): - """ return Source statement which contains the - given linenumber (counted from 0). - """ - start, end = self.getstatementrange(lineno) - return self[start:end] - - def getstatementrange(self, lineno): - """ return (start, end) tuple which spans the minimal - statement region which containing the given lineno. - """ - # XXX there must be a better than these heuristic ways ... - # XXX there may even be better heuristics :-) - if not (0 <= lineno < len(self)): - raise IndexError("lineno out of range") - - # 1. find the start of the statement - from codeop import compile_command - for start in range(lineno, -1, -1): - trylines = self.lines[start:lineno+1] - # quick hack to indent the source and get it as a string in one go - trylines.insert(0, 'def xxx():') - trysource = '\n '.join(trylines) - # ^ space here - try: - compile_command(trysource) - except (SyntaxError, OverflowError, ValueError): - pass - else: - break # got a valid or incomplete statement - - # 2. find the end of the statement - for end in range(lineno+1, len(self)+1): - trysource = self[start:end] - if trysource.isparseable(): - break - - return start, end - - def getblockend(self, lineno): - # XXX - lines = [x + '\n' for x in self.lines[lineno:]] - blocklines = inspect.getblock(lines) - #print blocklines - return lineno + len(blocklines) - 1 - - def deindent(self, offset=None): - """ return a new source object deindented by offset. 
- If offset is None then guess an indentation offset from - the first non-blank line. Subsequent lines which have a - lower indentation offset will be copied verbatim as - they are assumed to be part of multilines. - """ - # XXX maybe use the tokenizer to properly handle multiline - # strings etc.pp? - newsource = Source() - newsource.lines[:] = deindent(self.lines, offset) - return newsource - - def isparseable(self, deindent=True): - """ return True if source is parseable, heuristically - deindenting it by default. - """ - try: - import parser - except ImportError: - syntax_checker = lambda x: compile(x, 'asd', 'exec') - else: - syntax_checker = parser.suite - - if deindent: - source = str(self.deindent()) - else: - source = str(self) - try: - #compile(source+'\n', "x", "exec") - syntax_checker(source+'\n') - except SyntaxError: - return False - else: - return True - - def __str__(self): - return "\n".join(self.lines) - - def compile(self, filename=None, mode='exec', - flag=generators.compiler_flag, - dont_inherit=0, _genframe=None): - """ return compiled code object. if filename is None - invent an artificial filename which displays - the source/line position of the caller frame. - """ - if not filename or py.path.local(filename).check(file=0): - if _genframe is None: - _genframe = sys._getframe(1) # the caller - fn,lineno = _genframe.f_code.co_filename, _genframe.f_lineno - if not filename: - filename = '' % (fn, lineno) - else: - filename = '' % (filename, fn, lineno) - source = "\n".join(self.lines) + '\n' - try: - co = cpy_compile(source, filename, mode, flag) - except SyntaxError: - ex = sys.exc_info()[1] - # re-represent syntax errors from parsing python strings - msglines = self.lines[:ex.lineno] - if ex.offset: - msglines.append(" "*ex.offset + '^') - msglines.append("syntax error probably generated here: %s" % filename) - newex = SyntaxError('\n'.join(msglines)) - newex.offset = ex.offset - newex.lineno = ex.lineno - newex.text = ex.text - raise newex - else: - if flag & _AST_FLAG: - return co - co_filename = MyStr(filename) - co_filename.__source__ = self - return py.code.Code(co).new(rec=1, co_filename=co_filename) - #return newcode_withfilename(co, co_filename) - -# -# public API shortcut functions -# - -def compile_(source, filename=None, mode='exec', flags= - generators.compiler_flag, dont_inherit=0): - """ compile the given source to a raw code object, - which points back to the source code through - "co_filename.__source__". All code objects - contained in the code object will recursively - also have this special subclass-of-string - filename. - """ - if _ast is not None and isinstance(source, _ast.AST): - # XXX should Source support having AST? - return cpy_compile(source, filename, mode, flags, dont_inherit) - _genframe = sys._getframe(1) # the caller - s = Source(source) - co = s.compile(filename, mode, flags, _genframe=_genframe) - return co - - -def getfslineno(obj): - try: - code = py.code.Code(obj) - except TypeError: - # fallback to - fn = (py.std.inspect.getsourcefile(obj) or - py.std.inspect.getfile(obj)) - fspath = fn and py.path.local(fn) or None - if fspath: - try: - _, lineno = findsource(obj) - except IOError: - lineno = None - else: - lineno = None - else: - fspath = code.path - lineno = code.firstlineno - return fspath, lineno - -# -# helper functions -# -class MyStr(str): - """ custom string which allows to add attributes. 
""" - -def findsource(obj): - obj = py.code.getrawcode(obj) - try: - fullsource = obj.co_filename.__source__ - except AttributeError: - try: - sourcelines, lineno = py.std.inspect.findsource(obj) - except (KeyboardInterrupt, SystemExit): - raise - except: - return None, None - source = Source() - source.lines = [line.rstrip() for line in sourcelines] - return source, lineno - else: - lineno = obj.co_firstlineno - 1 - return fullsource, lineno - - -def getsource(obj, **kwargs): - obj = py.code.getrawcode(obj) - try: - fullsource = obj.co_filename.__source__ - except AttributeError: - try: - strsrc = inspect.getsource(obj) - except IndentationError: - strsrc = "\"Buggy python version consider upgrading, cannot get source\"" - assert isinstance(strsrc, str) - return Source(strsrc, **kwargs) - else: - lineno = obj.co_firstlineno - 1 - end = fullsource.getblockend(lineno) - return Source(fullsource[lineno:end+1], deident=True) - - -def deindent(lines, offset=None): - if offset is None: - for line in lines: - line = line.expandtabs() - s = line.lstrip() - if s: - offset = len(line)-len(s) - break - else: - offset = 0 - if offset == 0: - return list(lines) - newlines = [] - def readline_generator(lines): - for line in lines: - yield line + '\n' - while True: - yield '' - - r = readline_generator(lines) - try: - readline = r.next - except AttributeError: - readline = r.__next__ - - try: - for _, _, (sline, _), (eline, _), _ in tokenize.generate_tokens(readline): - if sline > len(lines): - break # End of input reached - if sline > len(newlines): - line = lines[sline - 1].expandtabs() - if line.lstrip() and line[:offset].isspace(): - line = line[offset:] # Deindent - newlines.append(line) - - for i in range(sline, eline): - # Don't deindent continuing lines of - # multiline tokens (i.e. multiline strings) - newlines.append(lines[i]) - except (IndentationError, tokenize.TokenError): - pass - # Add any lines we didn't see. E.g. if an exception was raised. - newlines.extend(lines[len(newlines):]) - return newlines --- a/_py/path/svnurl.py +++ /dev/null @@ -1,365 +0,0 @@ -""" -module defining a subversion path object based on the external -command 'svn'. This modules aims to work with svn 1.3 and higher -but might also interact well with earlier versions. -""" - -import os, sys, time, re -import py -from py import path, process -from _py.path import common -from _py.path import svnwc as svncommon -from _py.path.cacheutil import BuildcostAccessCache, AgingCache - -DEBUG=False - -class SvnCommandPath(svncommon.SvnPathBase): - """ path implementation that offers access to (possibly remote) subversion - repositories. 
""" - - _lsrevcache = BuildcostAccessCache(maxentries=128) - _lsnorevcache = AgingCache(maxentries=1000, maxseconds=60.0) - - def __new__(cls, path, rev=None, auth=None): - self = object.__new__(cls) - if isinstance(path, cls): - rev = path.rev - auth = path.auth - path = path.strpath - svncommon.checkbadchars(path) - path = path.rstrip('/') - self.strpath = path - self.rev = rev - self.auth = auth - return self - - def __repr__(self): - if self.rev == -1: - return 'svnurl(%r)' % self.strpath - else: - return 'svnurl(%r, %r)' % (self.strpath, self.rev) - - def _svnwithrev(self, cmd, *args): - """ execute an svn command, append our own url and revision """ - if self.rev is None: - return self._svnwrite(cmd, *args) - else: - args = ['-r', self.rev] + list(args) - return self._svnwrite(cmd, *args) - - def _svnwrite(self, cmd, *args): - """ execute an svn command, append our own url """ - l = ['svn %s' % cmd] - args = ['"%s"' % self._escape(item) for item in args] - l.extend(args) - l.append('"%s"' % self._encodedurl()) - # fixing the locale because we can't otherwise parse - string = " ".join(l) - if DEBUG: - print("execing %s" % string) - out = self._svncmdexecauth(string) - return out - - def _svncmdexecauth(self, cmd): - """ execute an svn command 'as is' """ - cmd = svncommon.fixlocale() + cmd - if self.auth is not None: - cmd += ' ' + self.auth.makecmdoptions() - return self._cmdexec(cmd) - - def _cmdexec(self, cmd): - try: - out = process.cmdexec(cmd) - except py.process.cmdexec.Error: - e = sys.exc_info()[1] - if (e.err.find('File Exists') != -1 or - e.err.find('File already exists') != -1): - raise py.error.EEXIST(self) - raise - return out - - def _svnpopenauth(self, cmd): - """ execute an svn command, return a pipe for reading stdin """ - cmd = svncommon.fixlocale() + cmd - if self.auth is not None: - cmd += ' ' + self.auth.makecmdoptions() - return self._popen(cmd) - - def _popen(self, cmd): - return os.popen(cmd) - - def _encodedurl(self): - return self._escape(self.strpath) - - def _norev_delentry(self, path): - auth = self.auth and self.auth.makecmdoptions() or None - self._lsnorevcache.delentry((str(path), auth)) - - def open(self, mode='r'): - """ return an opened file with the given mode. """ - if mode not in ("r", "rU",): - raise ValueError("mode %r not supported" % (mode,)) - assert self.check(file=1) # svn cat returns an empty file otherwise - if self.rev is None: - return self._svnpopenauth('svn cat "%s"' % ( - self._escape(self.strpath), )) - else: - return self._svnpopenauth('svn cat -r %s "%s"' % ( - self.rev, self._escape(self.strpath))) - - def dirpath(self, *args, **kwargs): - """ return the directory path of the current path joined - with any given path arguments. - """ - l = self.strpath.split(self.sep) - if len(l) < 4: - raise py.error.EINVAL(self, "base is not valid") - elif len(l) == 4: - return self.join(*args, **kwargs) - else: - return self.new(basename='').join(*args, **kwargs) - - # modifying methods (cache must be invalidated) - def mkdir(self, *args, **kwargs): - """ create & return the directory joined with args. - pass a 'msg' keyword argument to set the commit message. 
- """ - commit_msg = kwargs.get('msg', "mkdir by py lib invocation") - createpath = self.join(*args) - createpath._svnwrite('mkdir', '-m', commit_msg) - self._norev_delentry(createpath.dirpath()) - return createpath - - def copy(self, target, msg='copied by py lib invocation'): - """ copy path to target with checkin message msg.""" - if getattr(target, 'rev', None) is not None: - raise py.error.EINVAL(target, "revisions are immutable") - self._svncmdexecauth('svn copy -m "%s" "%s" "%s"' %(msg, - self._escape(self), self._escape(target))) - self._norev_delentry(target.dirpath()) - - def rename(self, target, msg="renamed by py lib invocation"): - """ rename this path to target with checkin message msg. """ - if getattr(self, 'rev', None) is not None: - raise py.error.EINVAL(self, "revisions are immutable") - self._svncmdexecauth('svn move -m "%s" --force "%s" "%s"' %( - msg, self._escape(self), self._escape(target))) - self._norev_delentry(self.dirpath()) - self._norev_delentry(self) - - def remove(self, rec=1, msg='removed by py lib invocation'): - """ remove a file or directory (or a directory tree if rec=1) with -checkin message msg.""" - if self.rev is not None: - raise py.error.EINVAL(self, "revisions are immutable") - self._svncmdexecauth('svn rm -m "%s" "%s"' %(msg, self._escape(self))) - self._norev_delentry(self.dirpath()) - - def export(self, topath): - """ export to a local path - - topath should not exist prior to calling this, returns a - py.path.local instance - """ - topath = py.path.local(topath) - args = ['"%s"' % (self._escape(self),), - '"%s"' % (self._escape(topath),)] - if self.rev is not None: - args = ['-r', str(self.rev)] + args - self._svncmdexecauth('svn export %s' % (' '.join(args),)) - return topath - - def ensure(self, *args, **kwargs): - """ ensure that an args-joined path exists (by default as - a file). If you specify a keyword argument 'dir=True' - then the path is forced to be a directory path. 
- """ - if getattr(self, 'rev', None) is not None: - raise py.error.EINVAL(self, "revisions are immutable") - target = self.join(*args) - dir = kwargs.get('dir', 0) - for x in target.parts(reverse=True): - if x.check(): - break - else: - raise py.error.ENOENT(target, "has not any valid base!") - if x == target: - if not x.check(dir=dir): - raise dir and py.error.ENOTDIR(x) or py.error.EISDIR(x) - return x - tocreate = target.relto(x) - basename = tocreate.split(self.sep, 1)[0] - tempdir = py.path.local.mkdtemp() - try: - tempdir.ensure(tocreate, dir=dir) - cmd = 'svn import -m "%s" "%s" "%s"' % ( - "ensure %s" % self._escape(tocreate), - self._escape(tempdir.join(basename)), - x.join(basename)._encodedurl()) - self._svncmdexecauth(cmd) - self._norev_delentry(x) - finally: - tempdir.remove() - return target - - # end of modifying methods - def _propget(self, name): - res = self._svnwithrev('propget', name) - return res[:-1] # strip trailing newline - - def _proplist(self): - res = self._svnwithrev('proplist') - lines = res.split('\n') - lines = [x.strip() for x in lines[1:]] - return svncommon.PropListDict(self, lines) - - def _listdir_nameinfo(self): - """ return sequence of name-info directory entries of self """ - def builder(): - try: - res = self._svnwithrev('ls', '-v') - except process.cmdexec.Error: - e = sys.exc_info()[1] - if e.err.find('non-existent in that revision') != -1: - raise py.error.ENOENT(self, e.err) - elif e.err.find('File not found') != -1: - raise py.error.ENOENT(self, e.err) - elif e.err.find('not part of a repository')!=-1: - raise py.error.ENOENT(self, e.err) - elif e.err.find('Unable to open')!=-1: - raise py.error.ENOENT(self, e.err) - elif e.err.lower().find('method not allowed')!=-1: - raise py.error.EACCES(self, e.err) - raise py.error.Error(e.err) - lines = res.split('\n') - nameinfo_seq = [] - for lsline in lines: - if lsline: - info = InfoSvnCommand(lsline) - if info._name != '.': # svn 1.5 produces '.' dirs, - nameinfo_seq.append((info._name, info)) - nameinfo_seq.sort() - return nameinfo_seq - auth = self.auth and self.auth.makecmdoptions() or None - if self.rev is not None: - return self._lsrevcache.getorbuild((self.strpath, self.rev, auth), - builder) - else: - return self._lsnorevcache.getorbuild((self.strpath, auth), - builder) - - def listdir(self, fil=None, sort=None): - """ list directory contents, possibly filter by the given fil func - and possibly sorted. - """ - if isinstance(fil, str): - fil = common.FNMatcher(fil) - nameinfo_seq = self._listdir_nameinfo() - if len(nameinfo_seq) == 1: - name, info = nameinfo_seq[0] - if name == self.basename and info.kind == 'file': - #if not self.check(dir=1): - raise py.error.ENOTDIR(self) - paths = [self.join(name) for (name, info) in nameinfo_seq] - if fil: - paths = [x for x in paths if fil(x)] - self._sortlist(paths, sort) - return paths - - - def log(self, rev_start=None, rev_end=1, verbose=False): - """ return a list of LogEntry instances for this path. -rev_start is the starting revision (defaulting to the first one). -rev_end is the last revision (defaulting to HEAD). -if verbose is True, then the LogEntry instances also know which files changed. 
-""" - assert self.check() #make it simpler for the pipe - rev_start = rev_start is None and "HEAD" or rev_start - rev_end = rev_end is None and "HEAD" or rev_end - - if rev_start == "HEAD" and rev_end == 1: - rev_opt = "" - else: - rev_opt = "-r %s:%s" % (rev_start, rev_end) - verbose_opt = verbose and "-v" or "" - xmlpipe = self._svnpopenauth('svn log --xml %s %s "%s"' % - (rev_opt, verbose_opt, self.strpath)) - from xml.dom import minidom - tree = minidom.parse(xmlpipe) - result = [] - for logentry in filter(None, tree.firstChild.childNodes): - if logentry.nodeType == logentry.ELEMENT_NODE: - result.append(svncommon.LogEntry(logentry)) - return result - -#01234567890123456789012345678901234567890123467 -# 2256 hpk 165 Nov 24 17:55 __init__.py -# XXX spotted by Guido, SVN 1.3.0 has different aligning, breaks the code!!! -# 1312 johnny 1627 May 05 14:32 test_decorators.py -# -class InfoSvnCommand: - # the '0?' part in the middle is an indication of whether the resource is - # locked, see 'svn help ls' - lspattern = re.compile( - r'^ *(?P\d+) +(?P.+?) +(0? *(?P\d+))? ' - '*(?P\w+ +\d{2} +[\d:]+) +(?P.*)$') - def __init__(self, line): - # this is a typical line from 'svn ls http://...' - #_ 1127 jum 0 Jul 13 15:28 branch/ - match = self.lspattern.match(line) - data = match.groupdict() - self._name = data['file'] - if self._name[-1] == '/': - self._name = self._name[:-1] - self.kind = 'dir' - else: - self.kind = 'file' - #self.has_props = l.pop(0) == 'P' - self.created_rev = int(data['rev']) - self.last_author = data['author'] - self.size = data['size'] and int(data['size']) or 0 - self.mtime = parse_time_with_missing_year(data['date']) - self.time = self.mtime * 1000000 - - def __eq__(self, other): - return self.__dict__ == other.__dict__ - - -#____________________________________________________ -# -# helper functions -#____________________________________________________ -def parse_time_with_missing_year(timestr): - """ analyze the time part from a single line of "svn ls -v" - the svn output doesn't show the year makes the 'timestr' - ambigous. 
- """ - import calendar - t_now = time.gmtime() - - tparts = timestr.split() - month = time.strptime(tparts.pop(0), '%b')[1] - day = time.strptime(tparts.pop(0), '%d')[2] - last = tparts.pop(0) # year or hour:minute - try: - year = time.strptime(last, '%Y')[0] - hour = minute = 0 - except ValueError: - hour, minute = time.strptime(last, '%H:%M')[3:5] - year = t_now[0] - - t_result = (year, month, day, hour, minute, 0,0,0,0) - if t_result > t_now: - year -= 1 - t_result = (year, month, day, hour, minute, 0,0,0,0) - return calendar.timegm(t_result) - -class PathEntry: - def __init__(self, ppart): - self.strpath = ppart.firstChild.nodeValue.encode('UTF-8') - self.action = ppart.getAttribute('action').encode('UTF-8') - if self.action == 'A': - self.copyfrom_path = ppart.getAttribute('copyfrom-path').encode('UTF-8') - if self.copyfrom_path: - self.copyfrom_rev = int(ppart.getAttribute('copyfrom-rev')) - --- a/_py/builtin.py +++ /dev/null @@ -1,203 +0,0 @@ -import sys - -try: - reversed = reversed -except NameError: - def reversed(sequence): - """reversed(sequence) -> reverse iterator over values of the sequence - - Return a reverse iterator - """ - if hasattr(sequence, '__reversed__'): - return sequence.__reversed__() - if not hasattr(sequence, '__getitem__'): - raise TypeError("argument to reversed() must be a sequence") - return reversed_iterator(sequence) - - class reversed_iterator(object): - - def __init__(self, seq): - self.seq = seq - self.remaining = len(seq) - - def __iter__(self): - return self - - def next(self): - i = self.remaining - if i > 0: - i -= 1 - item = self.seq[i] - self.remaining = i - return item - raise StopIteration - - def __length_hint__(self): - return self.remaining - -try: - sorted = sorted -except NameError: - builtin_cmp = cmp # need to use cmp as keyword arg - - def sorted(iterable, cmp=None, key=None, reverse=0): - use_cmp = None - if key is not None: - if cmp is None: - def use_cmp(x, y): - return builtin_cmp(x[0], y[0]) - else: - def use_cmp(x, y): - return cmp(x[0], y[0]) - l = [(key(element), element) for element in iterable] - else: - if cmp is not None: - use_cmp = cmp - l = list(iterable) - if use_cmp is not None: - l.sort(use_cmp) - else: - l.sort() - if reverse: - l.reverse() - if key is not None: - return [element for (_, element) in l] - return l - -try: - set, frozenset = set, frozenset -except NameError: - from sets import set, frozenset - -# pass through -enumerate = enumerate - -try: - BaseException = BaseException -except NameError: - BaseException = Exception - -try: - GeneratorExit = GeneratorExit -except NameError: - class GeneratorExit(Exception): - """ This exception is never raised, it is there to make it possible to - write code compatible with CPython 2.5 even in lower CPython - versions.""" - pass - GeneratorExit.__module__ = 'exceptions' - -if sys.version_info >= (3, 0): - exec ("print_ = print ; exec_=exec") - import builtins - - # some backward compatibility helpers - _basestring = str - def _totext(obj, encoding): - if isinstance(obj, bytes): - obj = obj.decode(encoding) - elif not isinstance(obj, str): - obj = str(obj) - return obj - - def _isbytes(x): - return isinstance(x, bytes) - def _istext(x): - return isinstance(x, str) - - def _getimself(function): - return getattr(function, '__self__', None) - - def _getfuncdict(function): - return getattr(function, "__dict__", None) - - def execfile(fn, globs=None, locs=None): - if globs is None: - back = sys._getframe(1) - globs = back.f_globals - locs = back.f_locals - del back - elif 
locs is None: - locs = globs - fp = open(fn, "rb") - try: - source = fp.read() - finally: - fp.close() - co = compile(source, fn, "exec", dont_inherit=True) - exec_(co, globs, locs) - - def callable(obj): - return hasattr(obj, "__call__") - -else: - import __builtin__ as builtins - _totext = unicode - _basestring = basestring - execfile = execfile - callable = callable - def _isbytes(x): - return isinstance(x, str) - def _istext(x): - return isinstance(x, unicode) - - def _getimself(function): - return getattr(function, 'im_self', None) - - def _getfuncdict(function): - return getattr(function, "__dict__", None) - - def print_(*args, **kwargs): - """ minimal backport of py3k print statement. """ - sep = ' ' - if 'sep' in kwargs: - sep = kwargs.pop('sep') - end = '\n' - if 'end' in kwargs: - end = kwargs.pop('end') - file = 'file' in kwargs and kwargs.pop('file') or sys.stdout - if kwargs: - args = ", ".join([str(x) for x in kwargs]) - raise TypeError("invalid keyword arguments: %s" % args) - at_start = True - for x in args: - if not at_start: - file.write(sep) - file.write(str(x)) - at_start = False - file.write(end) - - def exec_(obj, globals=None, locals=None): - """ minimal backport of py3k exec statement. """ - if globals is None: - frame = sys._getframe(1) - globals = frame.f_globals - if locals is None: - locals = frame.f_locals - elif locals is None: - locals = globals - exec2(obj, globals, locals) - -if sys.version_info >= (3,0): - exec (""" -def _reraise(cls, val, tb): - assert hasattr(val, '__traceback__') - raise val -""") -else: - exec (""" -def _reraise(cls, val, tb): - raise cls, val, tb -def exec2(obj, globals, locals): - exec obj in globals, locals -""") - -def _tryimport(*names): - """ return the first successfully imported module. """ - assert names - for name in names: - try: - return __import__(name, None, None, '__doc__') - except ImportError: - excinfo = sys.exc_info() - _reraise(*excinfo) --- a/_py/error.py +++ /dev/null @@ -1,83 +0,0 @@ -""" -create errno-specific classes for IO or os calls. - -""" -import sys, os, errno - -class Error(EnvironmentError): - def __repr__(self): - return "%s.%s %r: %s " %(self.__class__.__module__, - self.__class__.__name__, - self.__class__.__doc__, - " ".join(map(str, self.args)), - #repr(self.args) - ) - - def __str__(self): - s = "[%s]: %s" %(self.__class__.__doc__, - " ".join(map(str, self.args)), - ) - return s - -_winerrnomap = { - 2: errno.ENOENT, - 3: errno.ENOENT, - 17: errno.EEXIST, - 22: errno.ENOTDIR, - 267: errno.ENOTDIR, - 5: errno.EACCES, # anything better? -} - -class ErrorMaker(object): - """ lazily provides Exception classes for each possible POSIX errno - (as defined per the 'errno' module). All such instances - subclass EnvironmentError. - """ - Error = Error - _errno2class = {} - - def __getattr__(self, name): - eno = getattr(errno, name) - cls = self._geterrnoclass(eno) - setattr(self, name, cls) - return cls - - def _geterrnoclass(self, eno): - try: - return self._errno2class[eno] - except KeyError: - clsname = errno.errorcode.get(eno, "UnknownErrno%d" %(eno,)) - errorcls = type(Error)(clsname, (Error,), - {'__module__':'py.error', - '__doc__': os.strerror(eno)}) - self._errno2class[eno] = errorcls - return errorcls - - def checked_call(self, func, *args): - """ call a function and raise an errno-exception if applicable. 
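# A minimal sketch of the errno-mapping helper described in the docstring
# above: checked_call invokes the function and re-raises any EnvironmentError
# as an errno-specific subclass such as py.error.EEXIST or py.error.ENOENT.
# The directory name below is only an example.
import os
import py

dirname = "/tmp/py-example-dir"
try:
    py.error.checked_call(os.mkdir, dirname)
except py.error.EEXIST:
    pass  # the directory already existed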
""" - __tracebackhide__ = True - try: - return func(*args) - except self.Error: - raise - except EnvironmentError: - cls, value, tb = sys.exc_info() - if not hasattr(value, 'errno'): - raise - __tracebackhide__ = False - errno = value.errno - try: - if not isinstance(value, WindowsError): - raise NameError - except NameError: - # we are not on Windows, or we got a proper OSError - cls = self._geterrnoclass(errno) - else: - try: - cls = self._geterrnoclass(_winerrnomap[errno]) - except KeyError: - raise value - raise cls("%s%r" % (func.__name__, args)) - __tracebackhide__ = True - -error = ErrorMaker() --- a/_py/path/gateway/remotepath.py +++ /dev/null @@ -1,47 +0,0 @@ -import py, itertools -from _py.path import common - -COUNTER = itertools.count() - -class RemotePath(common.PathBase): - sep = '/' - - def __init__(self, channel, id, basename=None): - self._channel = channel - self._id = id - self._basename = basename - self._specs = {} - - def __del__(self): - self._channel.send(('DEL', self._id)) - - def __repr__(self): - return 'RemotePath(%s)' % self.basename - - def listdir(self, *args): - self._channel.send(('LIST', self._id) + args) - return [RemotePath(self._channel, id, basename) - for (id, basename) in self._channel.receive()] - - def dirpath(self): - id = ~COUNTER.next() - self._channel.send(('DIRPATH', self._id, id)) - return RemotePath(self._channel, id) - - def join(self, *args): - id = ~COUNTER.next() - self._channel.send(('JOIN', self._id, id) + args) - return RemotePath(self._channel, id) - - def _getbyspec(self, spec): - parts = spec.split(',') - ask = [x for x in parts if x not in self._specs] - if ask: - self._channel.send(('GET', self._id, ",".join(ask))) - for part, value in zip(ask, self._channel.receive()): - self._specs[part] = value - return [self._specs[x] for x in parts] - - def read(self): - self._channel.send(('READ', self._id)) - return self._channel.receive() --- a/_py/path/gateway/channeltest2.py +++ /dev/null @@ -1,21 +0,0 @@ -import py -from remotepath import RemotePath - - -SRC = open('channeltest.py', 'r').read() - -SRC += ''' -import py -srv = PathServer(channel.receive()) -channel.send(srv.p2c(py.path.local("/tmp"))) -''' - - -#gw = execnet.SshGateway('codespeak.net') -gw = execnet.PopenGateway() -gw.remote_init_threads(5) -c = gw.remote_exec(SRC, stdout=py.std.sys.stdout, stderr=py.std.sys.stderr) -subchannel = gw._channelfactory.new() -c.send(subchannel) - -p = RemotePath(subchannel, c.receive()) --- a/_py/process/forkedfunc.py +++ /dev/null @@ -1,108 +0,0 @@ - -""" - ForkedFunc provides a way to run a function in a forked process - and get at its return value, stdout and stderr output as well - as signals and exitstatusus. 
- - XXX see if tempdir handling is sane -""" - -import py -import os -import sys -import marshal - -class ForkedFunc(object): - EXITSTATUS_EXCEPTION = 3 - def __init__(self, fun, args=None, kwargs=None, nice_level=0): - if args is None: - args = [] - if kwargs is None: - kwargs = {} - self.fun = fun - self.args = args - self.kwargs = kwargs - self.tempdir = tempdir = py.path.local.mkdtemp() - self.RETVAL = tempdir.ensure('retval') - self.STDOUT = tempdir.ensure('stdout') - self.STDERR = tempdir.ensure('stderr') - - pid = os.fork() - if pid: # in parent process - self.pid = pid - else: # in child process - self._child(nice_level) - - def _child(self, nice_level): - # right now we need to call a function, but first we need to - # map all IO that might happen - # make sure sys.stdout points to file descriptor one - sys.stdout = stdout = self.STDOUT.open('w') - sys.stdout.flush() - fdstdout = stdout.fileno() - if fdstdout != 1: - os.dup2(fdstdout, 1) - sys.stderr = stderr = self.STDERR.open('w') - fdstderr = stderr.fileno() - if fdstderr != 2: - os.dup2(fdstderr, 2) - retvalf = self.RETVAL.open("wb") - EXITSTATUS = 0 - try: - if nice_level: - os.nice(nice_level) - try: - retval = self.fun(*self.args, **self.kwargs) - retvalf.write(marshal.dumps(retval)) - except: - excinfo = py.code.ExceptionInfo() - stderr.write(excinfo.exconly()) - EXITSTATUS = self.EXITSTATUS_EXCEPTION - finally: - stdout.close() - stderr.close() - retvalf.close() - os.close(1) - os.close(2) - os._exit(EXITSTATUS) - - def waitfinish(self, waiter=os.waitpid): - pid, systemstatus = waiter(self.pid, 0) - if systemstatus: - if os.WIFSIGNALED(systemstatus): - exitstatus = os.WTERMSIG(systemstatus) + 128 - else: - exitstatus = os.WEXITSTATUS(systemstatus) - #raise ExecutionFailed(status, systemstatus, cmd, - # ''.join(out), ''.join(err)) - else: - exitstatus = 0 - signal = systemstatus & 0x7f - if not exitstatus and not signal: - retval = self.RETVAL.open('rb') - try: - retval_data = retval.read() - finally: - retval.close() - retval = marshal.loads(retval_data) - else: - retval = None - stdout = self.STDOUT.read() - stderr = self.STDERR.read() - self._removetemp() - return Result(exitstatus, signal, retval, stdout, stderr) - - def _removetemp(self): - if self.tempdir.check(): - self.tempdir.remove() - - def __del__(self): - self._removetemp() - -class Result(object): - def __init__(self, exitstatus, signal, retval, stdout, stderr): - self.exitstatus = exitstatus - self.signal = signal - self.retval = retval - self.out = stdout - self.err = stderr --- a/_py/cmdline/pycleanup.py +++ /dev/null @@ -1,47 +0,0 @@ -#!/usr/bin/env python - -"""\ -py.cleanup [PATH] - -Delete pyc file recursively, starting from PATH (which defaults to the current -working directory). Don't follow links and don't recurse into directories with -a ".". 
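# Shell examples for the py.cleanup command line tool described above
# (paths are illustrative):
#
#   py.cleanup                    # remove *.pyc files below the current directory
#   py.cleanup -n mypkg           # dry run: only print what would be removed
#   py.cleanup -e .pyc,.pyo -d .  # also match .pyo files and remove empty directories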
-""" -import py - -def main(): - parser = py.std.optparse.OptionParser(usage=__doc__) - parser.add_option("-e", "--remove", dest="ext", default=".pyc", action="store", - help="remove files with the given comma-separated list of extensions" - ) - parser.add_option("-n", "--dryrun", dest="dryrun", default=False, - action="store_true", - help="display would-be-removed filenames" - ) - parser.add_option("-d", action="store_true", dest="removedir", - help="remove empty directories") - (options, args) = parser.parse_args() - if not args: - args = ["."] - ext = options.ext.split(",") - def shouldremove(p): - return p.ext in ext - - for arg in args: - path = py.path.local(arg) - py.builtin.print_("cleaning path", path, "of extensions", ext) - for x in path.visit(shouldremove, lambda x: x.check(dotfile=0, link=0)): - remove(x, options) - if options.removedir: - for x in path.visit(lambda x: x.check(dir=1), - lambda x: x.check(dotfile=0, link=0)): - if not x.listdir(): - remove(x, options) - -def remove(path, options): - if options.dryrun: - py.builtin.print_("would remove", path) - else: - py.builtin.print_("removing", path) - path.remove() - --- a/_py/compat/dep_subprocess.py +++ /dev/null @@ -1,4 +0,0 @@ - -import py -py.log._apiwarn("1.1", "py.compat.subprocess deprecated, use standard library version.", stacklevel="initpkg") -subprocess = py.std.subprocess --- a/_py/code/oldmagic2.py +++ /dev/null @@ -1,6 +0,0 @@ - -import py - -py.log._apiwarn("1.1", "py.magic.AssertionError is deprecated, use py.code._AssertionError", stacklevel=2) - -from py.code import _AssertionError as AssertionError --- a/_py/path/local.py +++ /dev/null @@ -1,799 +0,0 @@ -""" -local path implementation. -""" -import sys, os, stat, re, atexit -import py -from _py.path import common - -iswin32 = sys.platform == "win32" - -class Stat(object): - def __getattr__(self, name): - return getattr(self._osstatresult, "st_" + name) - - def __init__(self, path, osstatresult): - self.path = path - self._osstatresult = osstatresult - - def owner(self): - if iswin32: - raise NotImplementedError("XXX win32") - import pwd - entry = py.error.checked_call(pwd.getpwuid, self.uid) - return entry[0] - owner = property(owner, None, None, "owner of path") - - def group(self): - """ return group name of file. """ - if iswin32: - raise NotImplementedError("XXX win32") - import grp - entry = py.error.checked_call(grp.getgrgid, self.gid) - return entry[0] - group = property(group) - -class PosixPath(common.PathBase): - def chown(self, user, group, rec=0): - """ change ownership to the given user and group. - user and group may be specified by a number or - by a name. if rec is True change ownership - recursively. - """ - uid = getuserid(user) - gid = getgroupid(group) - if rec: - for x in self.visit(rec=lambda x: x.check(link=0)): - if x.check(link=0): - py.error.checked_call(os.chown, str(x), uid, gid) - py.error.checked_call(os.chown, str(self), uid, gid) - - def readlink(self): - """ return value of a symbolic link. """ - return py.error.checked_call(os.readlink, self.strpath) - - def mklinkto(self, oldname): - """ posix style hard link to another name. """ - py.error.checked_call(os.link, str(oldname), str(self)) - - def mksymlinkto(self, value, absolute=1): - """ create a symbolic link with the given value (pointing to another name). 
""" - if absolute: - py.error.checked_call(os.symlink, str(value), self.strpath) - else: - base = self.common(value) - # with posix local paths '/' is always a common base - relsource = self.__class__(value).relto(base) - reldest = self.relto(base) - n = reldest.count(self.sep) - target = self.sep.join(('..', )*n + (relsource, )) - py.error.checked_call(os.symlink, target, self.strpath) - - def samefile(self, other): - """ return True if other refers to the same stat object as self. """ - return py.error.checked_call(os.path.samefile, str(self), str(other)) - -def getuserid(user): - import pwd - if not isinstance(user, int): - user = pwd.getpwnam(user)[2] - return user - -def getgroupid(group): - import grp - if not isinstance(group, int): - group = grp.getgrnam(group)[2] - return group - -FSBase = not iswin32 and PosixPath or common.PathBase - -class LocalPath(FSBase): - """ object oriented interface to os.path and other local filesystem - related information. - """ - sep = os.sep - class Checkers(common.Checkers): - def _stat(self): - try: - return self._statcache - except AttributeError: - try: - self._statcache = self.path.stat() - except py.error.ELOOP: - self._statcache = self.path.lstat() - return self._statcache - - def dir(self): - return stat.S_ISDIR(self._stat().mode) - - def file(self): - return stat.S_ISREG(self._stat().mode) - - def exists(self): - return self._stat() - - def link(self): - st = self.path.lstat() - return stat.S_ISLNK(st.mode) - - def __new__(cls, path=None): - """ Initialize and return a local Path instance. - - Path can be relative to the current directory. - If it is None then the current working directory is taken. - Note that Path instances always carry an absolute path. - Note also that passing in a local path object will simply return - the exact same path object. Use new() to get a new copy. - """ - if isinstance(path, common.PathBase): - if path.__class__ == cls: - return path - path = path.strpath - # initialize the path - self = object.__new__(cls) - if not path: - self.strpath = os.getcwd() - elif isinstance(path, py.builtin._basestring): - self.strpath = os.path.abspath(os.path.normpath(str(path))) - else: - raise ValueError("can only pass None, Path instances " - "or non-empty strings to LocalPath") - assert isinstance(self.strpath, str) - return self - - def __hash__(self): - return hash(self.strpath) - - def __eq__(self, other): - s1 = str(self) - s2 = str(other) - if iswin32: - s1 = s1.lower() - s2 = s2.lower() - return s1 == s2 - - def __ne__(self, other): - return not (self == other) - - def __lt__(self, other): - return str(self) < str(other) - - def remove(self, rec=1): - """ remove a file or directory (or a directory tree if rec=1). """ - if self.check(dir=1, link=0): - if rec: - # force remove of readonly files on windows - if iswin32: - self.chmod(448, rec=1) # octcal 0700 - py.error.checked_call(py.std.shutil.rmtree, self.strpath) - else: - py.error.checked_call(os.rmdir, self.strpath) - else: - if iswin32: - self.chmod(448) # octcal 0700 - py.error.checked_call(os.remove, self.strpath) - - def computehash(self, hashtype="md5", chunksize=524288): - """ return hexdigest of hashvalue for this file. 
""" - try: - try: - import hashlib as mod - except ImportError: - if hashtype == "sha1": - hashtype = "sha" - mod = __import__(hashtype) - hash = getattr(mod, hashtype)() - except (AttributeError, ImportError): - raise ValueError("Don't know how to compute %r hash" %(hashtype,)) - f = self.open('rb') - try: - while 1: - buf = f.read(chunksize) - if not buf: - return hash.hexdigest() - hash.update(buf) - finally: - f.close() - - def new(self, **kw): - """ create a modified version of this path. - the following keyword arguments modify various path parts: - - a:/some/path/to/a/file.ext - || drive - |-------------| dirname - |------| basename - |--| purebasename - |--| ext - """ - obj = object.__new__(self.__class__) - drive, dirname, basename, purebasename,ext = self._getbyspec( - "drive,dirname,basename,purebasename,ext") - if 'basename' in kw: - if 'purebasename' in kw or 'ext' in kw: - raise ValueError("invalid specification %r" % kw) - else: - pb = kw.setdefault('purebasename', purebasename) - try: - ext = kw['ext'] - except KeyError: - pass - else: - if ext and not ext.startswith('.'): - ext = '.' + ext - kw['basename'] = pb + ext - - kw.setdefault('drive', drive) - kw.setdefault('dirname', dirname) - kw.setdefault('sep', self.sep) - obj.strpath = os.path.normpath( - "%(drive)s%(dirname)s%(sep)s%(basename)s" % kw) - return obj - - def _getbyspec(self, spec): - """ return a sequence of specified path parts. 'spec' is - a comma separated string containing path part names. - according to the following convention: - a:/some/path/to/a/file.ext - || drive - |-------------| dirname - |------| basename - |--| purebasename - |--| ext - """ - res = [] - parts = self.strpath.split(self.sep) - - args = filter(None, spec.split(',') ) - append = res.append - for name in args: - if name == 'drive': - append(parts[0]) - elif name == 'dirname': - append(self.sep.join(['']+parts[1:-1])) - else: - basename = parts[-1] - if name == 'basename': - append(basename) - else: - i = basename.rfind('.') - if i == -1: - purebasename, ext = basename, '' - else: - purebasename, ext = basename[:i], basename[i:] - if name == 'purebasename': - append(purebasename) - elif name == 'ext': - append(ext) - else: - raise ValueError("invalid part specification %r" % name) - return res - - def join(self, *args, **kwargs): - """ return a new path by appending all 'args' as path - components. if abs=1 is used restart from root if any - of the args is an absolute path. - """ - if not args: - return self - strpath = self.strpath - sep = self.sep - strargs = [str(x) for x in args] - if kwargs.get('abs', 0): - for i in range(len(strargs)-1, -1, -1): - if os.path.isabs(strargs[i]): - strpath = strargs[i] - strargs = strargs[i+1:] - break - for arg in strargs: - arg = arg.strip(sep) - if iswin32: - # allow unix style paths even on windows. - arg = arg.strip('/') - arg = arg.replace('/', sep) - if arg: - if not strpath.endswith(sep): - strpath += sep - strpath += arg - obj = self.new() - obj.strpath = os.path.normpath(strpath) - return obj - - def open(self, mode='r'): - """ return an opened file with the given mode. """ - return py.error.checked_call(open, self.strpath, mode) - - def listdir(self, fil=None, sort=None): - """ list directory contents, possibly filter by the given fil func - and possibly sorted. 
- """ - if isinstance(fil, str): - fil = common.FNMatcher(fil) - res = [] - for name in py.error.checked_call(os.listdir, self.strpath): - childurl = self.join(name) - if fil is None or fil(childurl): - res.append(childurl) - self._sortlist(res, sort) - return res - - def size(self): - """ return size of the underlying file object """ - return self.stat().size - - def mtime(self): - """ return last modification time of the path. """ - return self.stat().mtime - - def copy(self, target, archive=False): - """ copy path to target.""" - assert not archive, "XXX archive-mode not supported" - if self.check(file=1): - if target.check(dir=1): - target = target.join(self.basename) - assert self!=target - copychunked(self, target) - else: - def rec(p): - return p.check(link=0) - for x in self.visit(rec=rec): - relpath = x.relto(self) - newx = target.join(relpath) - newx.dirpath().ensure(dir=1) - if x.check(link=1): - newx.mksymlinkto(x.readlink()) - elif x.check(file=1): - copychunked(x, newx) - elif x.check(dir=1): - newx.ensure(dir=1) - - def rename(self, target): - """ rename this path to target. """ - return py.error.checked_call(os.rename, str(self), str(target)) - - def dump(self, obj, bin=1): - """ pickle object into path location""" - f = self.open('wb') - try: - py.error.checked_call(py.std.pickle.dump, obj, f, bin) - finally: - f.close() - - def mkdir(self, *args): - """ create & return the directory joined with args. """ - p = self.join(*args) - py.error.checked_call(os.mkdir, str(p)) - return p - - def write(self, data, mode='w'): - """ write data into path. """ - if 'b' in mode: - if not py.builtin._isbytes(data): - raise ValueError("can only process bytes") - else: - if not py.builtin._istext(data): - if not py.builtin._isbytes(data): - data = str(data) - else: - data = py.builtin._totext(data, sys.getdefaultencoding()) - f = self.open(mode) - try: - f.write(data) - finally: - f.close() - - def _ensuredirs(self): - parent = self.dirpath() - if parent == self: - return self - if parent.check(dir=0): - parent._ensuredirs() - if self.check(dir=0): - try: - self.mkdir() - except py.error.EEXIST: - # race condition: file/dir created by another thread/process. - # complain if it is not a dir - if self.check(dir=0): - raise - return self - - def ensure(self, *args, **kwargs): - """ ensure that an args-joined path exists (by default as - a file). if you specify a keyword argument 'dir=True' - then the path is forced to be a directory path. - """ - p = self.join(*args) - if kwargs.get('dir', 0): - return p._ensuredirs() - else: - p.dirpath()._ensuredirs() - if not p.check(file=1): - p.open('w').close() - return p - - def stat(self): - """ Return an os.stat() tuple. """ - return Stat(self, py.error.checked_call(os.stat, self.strpath)) - - def lstat(self): - """ Return an os.lstat() tuple. """ - return Stat(self, py.error.checked_call(os.lstat, self.strpath)) - - def setmtime(self, mtime=None): - """ set modification time for the given path. if 'mtime' is None - (the default) then the file's mtime is set to current time. - - Note that the resolution for 'mtime' is platform dependent. 
- """ - if mtime is None: - return py.error.checked_call(os.utime, self.strpath, mtime) - try: - return py.error.checked_call(os.utime, self.strpath, (-1, mtime)) - except py.error.EINVAL: - return py.error.checked_call(os.utime, self.strpath, (self.atime(), mtime)) - - def chdir(self): - """ change directory to self and return old current directory """ - old = self.__class__() - py.error.checked_call(os.chdir, self.strpath) - return old - - def realpath(self): - """ return a new path which contains no symbolic links.""" - return self.__class__(os.path.realpath(self.strpath)) - - def atime(self): - """ return last access time of the path. """ - return self.stat().atime - - def __repr__(self): - return 'local(%r)' % self.strpath - - def __str__(self): - """ return string representation of the Path. """ - return self.strpath - - def pypkgpath(self, pkgname=None): - """ return the path's package path by looking for the given - pkgname. If pkgname is None then look for the last - directory upwards which still contains an __init__.py. - Return None if a pkgpath can not be determined. - """ - pkgpath = None - for parent in self.parts(reverse=True): - if pkgname is None: - if parent.check(file=1): - continue - if parent.join('__init__.py').check(): - pkgpath = parent - continue - return pkgpath - else: - if parent.basename == pkgname: - return parent - return pkgpath - - def _prependsyspath(self, path): - s = str(path) - if s != sys.path[0]: - #print "prepending to sys.path", s - sys.path.insert(0, s) - - def chmod(self, mode, rec=0): - """ change permissions to the given mode. If mode is an - integer it directly encodes the os-specific modes. - if rec is True perform recursively. - """ - if not isinstance(mode, int): - raise TypeError("mode %r must be an integer" % (mode,)) - if rec: - for x in self.visit(rec=rec): - py.error.checked_call(os.chmod, str(x), mode) - py.error.checked_call(os.chmod, str(self), mode) - - def pyimport(self, modname=None, ensuresyspath=True): - """ return path as an imported python module. - if modname is None, look for the containing package - and construct an according module name. - The module will be put/looked up in sys.modules. - """ - if not self.check(): - raise py.error.ENOENT(self) - #print "trying to import", self - pkgpath = None - if modname is None: - pkgpath = self.pypkgpath() - if pkgpath is not None: - if ensuresyspath: - self._prependsyspath(pkgpath.dirpath()) - pkg = __import__(pkgpath.basename, None, None, []) - names = self.new(ext='').relto(pkgpath.dirpath()) - names = names.split(self.sep) - modname = ".".join(names) - else: - # no package scope, still make it possible - if ensuresyspath: - self._prependsyspath(self.dirpath()) - modname = self.purebasename - mod = __import__(modname, None, None, ['__doc__']) - modfile = mod.__file__ - if modfile[-4:] in ('.pyc', '.pyo'): - modfile = modfile[:-1] - elif modfile.endswith('$py.class'): - modfile = modfile[:-9] + '.py' - if not self.samefile(modfile): - raise EnvironmentError("mismatch:\n" - "imported module %r\n" - "does not stem from %r\n" - "maybe __init__.py files are missing?" 
% (mod, str(self))) - return mod - else: - try: - return sys.modules[modname] - except KeyError: - # we have a custom modname, do a pseudo-import - mod = py.std.types.ModuleType(modname) - mod.__file__ = str(self) - sys.modules[modname] = mod - try: - py.builtin.execfile(str(self), mod.__dict__) - except: - del sys.modules[modname] - raise - return mod - - def sysexec(self, *argv, **popen_opts): - """ return stdout text from executing a system child process, - where the 'self' path points to executable. - The process is directly invoked and not through a system shell. - """ - from subprocess import Popen, PIPE - argv = map(str, argv) - popen_opts['stdout'] = popen_opts['stderr'] = PIPE - proc = Popen([str(self)] + list(argv), **popen_opts) - stdout, stderr = proc.communicate() - ret = proc.wait() - if py.builtin._isbytes(stdout): - stdout = py.builtin._totext(stdout, sys.getdefaultencoding()) - if ret != 0: - if py.builtin._isbytes(stderr): - stderr = py.builtin._totext(stderr, sys.getdefaultencoding()) - raise py.process.cmdexec.Error(ret, ret, str(self), - stdout, stderr,) - return stdout - - def sysfind(cls, name, checker=None): - """ return a path object found by looking at the systems - underlying PATH specification. If the checker is not None - it will be invoked to filter matching paths. If a binary - cannot be found, None is returned - Note: This is probably not working on plain win32 systems - but may work on cygwin. - """ - if os.path.isabs(name): - p = py.path.local(name) - if p.check(file=1): - return p - else: - if iswin32: - paths = py.std.os.environ['Path'].split(';') - if '' not in paths and '.' not in paths: - paths.append('.') - try: - systemroot = os.environ['SYSTEMROOT'] - except KeyError: - pass - else: - paths = [re.sub('%SystemRoot%', systemroot, path) - for path in paths] - tryadd = '', '.exe', '.com', '.bat' # XXX add more? - else: - paths = py.std.os.environ['PATH'].split(':') - tryadd = ('',) - - for x in paths: - for addext in tryadd: - p = py.path.local(x).join(name, abs=True) + addext - try: - if p.check(file=1): - if checker: - if not checker(p): - continue - return p - except py.error.EACCES: - pass - return None - sysfind = classmethod(sysfind) - - def _gethomedir(cls): - try: - x = os.environ['HOME'] - except KeyError: - x = os.environ['HOMEPATH'] - return cls(x) - _gethomedir = classmethod(_gethomedir) - - #""" - #special class constructors for local filesystem paths - #""" - def get_temproot(cls): - """ return the system's temporary directory - (where tempfiles are usually created in) - """ - return py.path.local(py.std.tempfile.gettempdir()) - get_temproot = classmethod(get_temproot) - - def mkdtemp(cls): - """ return a Path object pointing to a fresh new temporary directory - (which we created ourself). - """ - import tempfile - tries = 10 - for i in range(tries): - dname = tempfile.mktemp() - dpath = cls(tempfile.mktemp()) - try: - dpath.mkdir() - except (py.error.EEXIST, py.error.EPERM, py.error.EACCES): - continue - return dpath - raise py.error.ENOENT(dpath, "could not create tempdir, %d tries" % tries) - mkdtemp = classmethod(mkdtemp) - - def make_numbered_dir(cls, prefix='session-', rootdir=None, keep=3, - lock_timeout = 172800): # two days - """ return unique directory with a number greater than the current - maximum one. The number is assumed to start directly after prefix. - if keep is true directories with a number less than (maxnum-keep) - will be removed. 
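# Sketch of the numbered-directory helper documented above: each call creates
# the next "<prefix><N>" directory below the system temp dir and prunes
# directories more than 'keep' numbers behind the newest one.
import py

d = py.path.local.make_numbered_dir(prefix="session-", keep=3)
print(d)   # e.g. .../session-0 on the first call, .../session-1 on the next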
- """ - if rootdir is None: - rootdir = cls.get_temproot() - - def parse_num(path): - """ parse the number out of a path (if it matches the prefix) """ - bn = path.basename - if bn.startswith(prefix): - try: - return int(bn[len(prefix):]) - except ValueError: - pass - - # compute the maximum number currently in use with the - # prefix - lastmax = None - while True: - maxnum = -1 - for path in rootdir.listdir(): - num = parse_num(path) - if num is not None: - maxnum = max(maxnum, num) - - # make the new directory - try: - udir = rootdir.mkdir(prefix + str(maxnum+1)) - except py.error.EEXIST: - # race condition: another thread/process created the dir - # in the meantime. Try counting again - if lastmax == maxnum: - raise - lastmax = maxnum - continue - break - - # put a .lock file in the new directory that will be removed at - # process exit - if lock_timeout: - lockfile = udir.join('.lock') - mypid = os.getpid() - if hasattr(lockfile, 'mksymlinkto'): - lockfile.mksymlinkto(str(mypid)) - else: - lockfile.write(str(mypid)) - def try_remove_lockfile(): - # in a fork() situation, only the last process should - # remove the .lock, otherwise the other processes run the - # risk of seeing their temporary dir disappear. For now - # we remove the .lock in the parent only (i.e. we assume - # that the children finish before the parent). - if os.getpid() != mypid: - return - try: - lockfile.remove() - except py.error.Error: - pass - atexit.register(try_remove_lockfile) - - # prune old directories - if keep: - for path in rootdir.listdir(): - num = parse_num(path) - if num is not None and num <= (maxnum - keep): - lf = path.join('.lock') - try: - t1 = lf.lstat().mtime - t2 = lockfile.lstat().mtime - if not lock_timeout or abs(t2-t1) < lock_timeout: - continue # skip directories still locked - except py.error.Error: - pass # assume that it means that there is no 'lf' - try: - path.remove(rec=1) - except KeyboardInterrupt: - raise - except: # this might be py.error.Error, WindowsError ... - pass - - # make link... - try: - username = os.environ['USER'] #linux, et al - except KeyError: - try: - username = os.environ['USERNAME'] #windows - except KeyError: - username = 'current' - - src = str(udir) - dest = src[:src.rfind('-')] + '-' + username - try: - os.unlink(dest) - except OSError: - pass - try: - os.symlink(src, dest) - except (OSError, AttributeError): # AttributeError on win32 - pass - - return udir - make_numbered_dir = classmethod(make_numbered_dir) - -def copychunked(src, dest): - chunksize = 524288 # half a meg of bytes - fsrc = src.open('rb') - try: - fdest = dest.open('wb') - try: - while 1: - buf = fsrc.read(chunksize) - if not buf: - break - fdest.write(buf) - finally: - fdest.close() - finally: - fsrc.close() - -def autopath(globs=None): - """ (deprecated) return the (local) path of the "current" file pointed to by globals or - if it is none - alternatively the callers frame globals. - - the path will always point to a .py file or to None. 
- the path will have the following payload: - pkgdir is the last parent directory path containing __init__.py - """ - py.log._apiwarn("1.1", "py.magic.autopath deprecated, " - "use py.path.local(__file__) and maybe pypkgpath/pyimport().") - if globs is None: - globs = sys._getframe(1).f_globals - try: - __file__ = globs['__file__'] - except KeyError: - if not sys.argv[0]: - raise ValueError("cannot compute autopath in interactive mode") - __file__ = os.path.abspath(sys.argv[0]) - - ret = py.path.local(__file__) - if ret.ext in ('.pyc', '.pyo'): - ret = ret.new(ext='.py') - current = pkgdir = ret.dirpath() - while 1: - if current.join('__init__.py').check(): - pkgdir = current - current = current.dirpath() - if pkgdir != current: - continue - elif str(current) not in sys.path: - sys.path.insert(0, str(current)) - break - ret.pkgdir = pkgdir - return ret - --- a/_py/compat/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -""" compatibility modules (taken from 2.4.4) """ - --- a/_py/cmdline/__init__.py +++ /dev/null @@ -1,1 +0,0 @@ -# --- a/_py/cmdline/pysvnwcrevert.py +++ /dev/null @@ -1,55 +0,0 @@ -#! /usr/bin/env python -"""\ -py.svnwcrevert [options] WCPATH - -Running this script and then 'svn up' puts the working copy WCPATH in a state -as clean as a fresh check-out. - -WARNING: you'll loose all local changes, obviously! - -This script deletes all files that have been modified -or that svn doesn't explicitly know about, including svn:ignored files -(like .pyc files, hint hint). - -The goal of this script is to leave the working copy with some files and -directories possibly missing, but - most importantly - in a state where -the following 'svn up' won't just crash. -""" - -import sys, py - -def kill(p, root): - print('< %s' % (p.relto(root),)) - p.remove(rec=1) - -def svnwcrevert(path, root=None, precious=[]): - if root is None: - root = path - wcpath = py.path.svnwc(path) - try: - st = wcpath.status() - except ValueError: # typically, "bad char in wcpath" - kill(path, root) - return - for p in path.listdir(): - if p.basename == '.svn' or p.basename in precious: - continue - wcp = py.path.svnwc(p) - if wcp not in st.unchanged and wcp not in st.external: - kill(p, root) - elif p.check(dir=1): - svnwcrevert(p, root) - -# XXX add a functional test - -parser = py.std.optparse.OptionParser(usage=__doc__) -parser.add_option("-p", "--precious", - action="append", dest="precious", default=[], - help="preserve files with this name") - -def main(): - opts, args = parser.parse_args() - if len(args) != 1: - parser.print_help() - sys.exit(2) - svnwcrevert(py.path.local(args[0]), precious=opts.precious) --- a/_py/compat/dep_doctest.py +++ /dev/null @@ -1,4 +0,0 @@ -import py - -py.log._apiwarn("1.1", "py.compat.doctest deprecated, use standard library version.", stacklevel="initpkg") -doctest = py.std.doctest --- a/_py/_metainfo.py +++ /dev/null @@ -1,5 +0,0 @@ - -import py -import _py - -impldir = py.path.local(_py.__file__).dirpath() --- a/_py/path/gateway/channeltest.py +++ /dev/null @@ -1,65 +0,0 @@ -import threading - - -class PathServer: - - def __init__(self, channel): - self.channel = channel - self.C2P = {} - self.next_id = 0 - threading.Thread(target=self.serve).start() - - def p2c(self, path): - id = self.next_id - self.next_id += 1 - self.C2P[id] = path - return id - - def command_LIST(self, id, *args): - path = self.C2P[id] - answer = [(self.p2c(p), p.basename) for p in path.listdir(*args)] - self.channel.send(answer) - - def command_DEL(self, id): - del self.C2P[id] - - def command_GET(self, 
id, spec): - path = self.C2P[id] - self.channel.send(path._getbyspec(spec)) - - def command_READ(self, id): - path = self.C2P[id] - self.channel.send(path.read()) - - def command_JOIN(self, id, resultid, *args): - path = self.C2P[id] - assert resultid not in self.C2P - self.C2P[resultid] = path.join(*args) - - def command_DIRPATH(self, id, resultid): - path = self.C2P[id] - assert resultid not in self.C2P - self.C2P[resultid] = path.dirpath() - - def serve(self): - try: - while 1: - msg = self.channel.receive() - meth = getattr(self, 'command_' + msg[0]) - meth(*msg[1:]) - except EOFError: - pass - -if __name__ == '__main__': - import py - gw = execnet.PopenGateway() - channel = gw._channelfactory.new() - srv = PathServer(channel) - c = gw.remote_exec(""" - import remotepath - p = remotepath.RemotePath(channel.receive(), channel.receive()) - channel.send(len(p.listdir())) - """) - c.send(channel) - c.send(srv.p2c(py.path.local('/tmp'))) - print(c.receive()) --- a/_py/io/capture.py +++ /dev/null @@ -1,344 +0,0 @@ -import os -import sys -import py -import tempfile - -try: - from io import StringIO -except ImportError: - from StringIO import StringIO - -if sys.version_info < (3,0): - class TextIO(StringIO): - def write(self, data): - if not isinstance(data, unicode): - data = unicode(data, getattr(self, '_encoding', 'UTF-8')) - StringIO.write(self, data) -else: - TextIO = StringIO - -try: - from io import BytesIO -except ImportError: - class BytesIO(StringIO): - def write(self, data): - if isinstance(data, unicode): - raise TypeError("not a byte value: %r" %(data,)) - StringIO.write(self, data) - -class FDCapture: - """ Capture IO to/from a given os-level filedescriptor. """ - - def __init__(self, targetfd, tmpfile=None): - """ save targetfd descriptor, and open a new - temporary file there. If no tmpfile is - specified a tempfile.Tempfile() will be opened - in text mode. - """ - self.targetfd = targetfd - if tmpfile is None: - f = tempfile.TemporaryFile('wb+') - tmpfile = dupfile(f, encoding="UTF-8") - f.close() - self.tmpfile = tmpfile - self._savefd = os.dup(targetfd) - os.dup2(self.tmpfile.fileno(), targetfd) - self._patched = [] - - def setasfile(self, name, module=sys): - """ patch . 
to self.tmpfile - """ - key = (module, name) - self._patched.append((key, getattr(module, name))) - setattr(module, name, self.tmpfile) - - def unsetfiles(self): - """ unpatch all patched items - """ - while self._patched: - (module, name), value = self._patched.pop() - setattr(module, name, value) - - def done(self): - """ unpatch and clean up, returns the self.tmpfile (file object) - """ - os.dup2(self._savefd, self.targetfd) - self.unsetfiles() - os.close(self._savefd) - self.tmpfile.seek(0) - return self.tmpfile - - def writeorg(self, data): - """ write a string to the original file descriptor - """ - tempfp = tempfile.TemporaryFile() - try: - os.dup2(self._savefd, tempfp.fileno()) - tempfp.write(data) - finally: - tempfp.close() - - -def dupfile(f, mode=None, buffering=0, raising=False, encoding=None): - """ return a new open file object that's a duplicate of f - - mode is duplicated if not given, 'buffering' controls - buffer size (defaulting to no buffering) and 'raising' - defines whether an exception is raised when an incompatible - file object is passed in (if raising is False, the file - object itself will be returned) - """ - try: - fd = f.fileno() - except AttributeError: - if raising: - raise - return f - newfd = os.dup(fd) - mode = mode and mode or f.mode - if sys.version_info >= (3,0): - if encoding is not None: - mode = mode.replace("b", "") - buffering = True - return os.fdopen(newfd, mode, buffering, encoding, closefd=False) - else: - f = os.fdopen(newfd, mode, buffering) - if encoding is not None: - return EncodedFile(f, encoding) - return f - -class EncodedFile(object): - def __init__(self, _stream, encoding): - self._stream = _stream - self.encoding = encoding - - def write(self, obj): - if isinstance(obj, unicode): - obj = obj.encode(self.encoding) - elif isinstance(obj, str): - pass - else: - obj = str(obj) - self._stream.write(obj) - - def writelines(self, linelist): - data = ''.join(linelist) - self.write(data) - - def __getattr__(self, name): - return getattr(self._stream, name) - -class Capture(object): - def call(cls, func, *args, **kwargs): - """ return a (res, out, err) tuple where - out and err represent the output/error output - during function execution. - call the given function with args/kwargs - and capture output/error during its execution. - """ - so = cls() - try: - res = func(*args, **kwargs) - finally: - out, err = so.reset() - return res, out, err - call = classmethod(call) - - def reset(self): - """ reset sys.stdout/stderr and return captured output as strings. """ - if hasattr(self, '_suspended'): - outfile = self._kwargs['out'] - errfile = self._kwargs['err'] - del self._kwargs - else: - outfile, errfile = self.done() - out, err = "", "" - if outfile: - out = outfile.read() - outfile.close() - if errfile and errfile != outfile: - err = errfile.read() - errfile.close() - return out, err - - def suspend(self): - """ return current snapshot captures, memorize tempfiles. """ - assert not hasattr(self, '_suspended') - self._suspended = True - outerr = self.readouterr() - outfile, errfile = self.done() - self._kwargs['out'] = outfile - self._kwargs['err'] = errfile - return outerr - - def resume(self): - """ resume capturing with original temp files. 
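# Sketch of file-descriptor level capturing with the FDCapture class defined
# earlier in this module: everything written to fd 1 ends up in a temp file.
import sys
import py

cap = py.io.FDCapture(1)          # redirect the os-level stdout descriptor
print("captured at the fd level")
sys.stdout.flush()                # make sure the data actually reaches fd 1
f = cap.done()                    # restore fd 1 and rewind the temp file
print(f.read())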
""" - assert self._suspended - self._initialize(**self._kwargs) - del self._suspended - - -class StdCaptureFD(Capture): - """ This class allows to capture writes to FD1 and FD2 - and may connect a NULL file to FD0 (and prevent - reads from sys.stdin) - """ - def __init__(self, out=True, err=True, - mixed=False, in_=True, patchsys=True): - self._kwargs = locals().copy() - del self._kwargs['self'] - self._initialize(**self._kwargs) - - def _initialize(self, out=True, err=True, - mixed=False, in_=True, patchsys=True): - if in_: - self._oldin = (sys.stdin, os.dup(0)) - sys.stdin = DontReadFromInput() - fd = os.open(devnullpath, os.O_RDONLY) - os.dup2(fd, 0) - os.close(fd) - if out: - tmpfile = None - if hasattr(out, 'write'): - tmpfile = out - self.out = py.io.FDCapture(1, tmpfile=tmpfile) - if patchsys: - self.out.setasfile('stdout') - if err: - if mixed and out: - tmpfile = self.out.tmpfile - elif hasattr(err, 'write'): - tmpfile = err - else: - tmpfile = None - self.err = py.io.FDCapture(2, tmpfile=tmpfile) - if patchsys: - self.err.setasfile('stderr') - - def done(self): - """ return (outfile, errfile) and stop capturing. """ - if hasattr(self, 'out'): - outfile = self.out.done() - else: - outfile = None - if hasattr(self, 'err'): - errfile = self.err.done() - else: - errfile = None - if hasattr(self, '_oldin'): - oldsys, oldfd = self._oldin - os.dup2(oldfd, 0) - os.close(oldfd) - sys.stdin = oldsys - return outfile, errfile - - def readouterr(self): - """ return snapshot value of stdout/stderr capturings. """ - l = [] - for name in ('out', 'err'): - res = "" - if hasattr(self, name): - f = getattr(self, name).tmpfile - f.seek(0) - res = f.read() - f.truncate(0) - f.seek(0) - l.append(res) - return l - -class StdCapture(Capture): - """ This class allows to capture writes to sys.stdout|stderr "in-memory" - and will raise errors on tries to read from sys.stdin. It only - modifies sys.stdout|stderr|stdin attributes and does not - touch underlying File Descriptors (use StdCaptureFD for that). - """ - def __init__(self, out=True, err=True, in_=True, mixed=False): - self._kwargs = locals().copy() - del self._kwargs['self'] - self._initialize(**self._kwargs) - - def _initialize(self, out, err, in_, mixed): - self._out = out - self._err = err - self._in = in_ - if out: - self._oldout = sys.stdout - if not hasattr(out, 'write'): - out = TextIO() - sys.stdout = self.out = out - if err: - self._olderr = sys.stderr - if out and mixed: - err = self.out - elif not hasattr(err, 'write'): - err = TextIO() - sys.stderr = self.err = err - if in_: - self._oldin = sys.stdin - sys.stdin = self.newin = DontReadFromInput() - - def done(self): - """ return (outfile, errfile) and stop capturing. """ - o,e = sys.stdout, sys.stderr - if self._out: - try: - sys.stdout = self._oldout - except AttributeError: - raise IOError("stdout capturing already reset") - del self._oldout - outfile = self.out - outfile.seek(0) - else: - outfile = None - if self._err: - try: - sys.stderr = self._olderr - except AttributeError: - raise IOError("stderr capturing already reset") - del self._olderr - errfile = self.err - errfile.seek(0) - else: - errfile = None - if self._in: - sys.stdin = self._oldin - return outfile, errfile - - def readouterr(self): - """ return snapshot value of stdout/stderr capturings. 
""" - out = err = "" - if self._out: - out = sys.stdout.getvalue() - sys.stdout.truncate(0) - if self._err: - err = sys.stderr.getvalue() - sys.stderr.truncate(0) - return out, err - -class DontReadFromInput: - """Temporary stub class. Ideally when stdin is accessed, the - capturing should be turned off, with possibly all data captured - so far sent to the screen. This should be configurable, though, - because in automated test runs it is better to crash than - hang indefinitely. - """ - def read(self, *args): - raise IOError("reading from stdin while output is captured") - readline = read - readlines = read - __iter__ = read - - def fileno(self): - raise ValueError("redirected Stdin is pseudofile, has no fileno()") - def isatty(self): - return False - -try: - devnullpath = os.devnull -except AttributeError: - if os.name == 'nt': - devnullpath = 'NUL' - else: - devnullpath = '/dev/null' - - --- a/_py/cmdline/pywhich.py +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env python - -"""\ -py.which [name] - -print the location of the given python module or package name -""" - -import sys - -def main(): - name = sys.argv[1] - try: - mod = __import__(name) - except ImportError: - sys.stderr.write("could not import: " + name + "\n") - else: - try: - location = mod.__file__ - except AttributeError: - sys.stderr.write("module (has no __file__): " + str(mod)) - else: - print(location) --- a/_py/apipkg.py +++ /dev/null @@ -1,69 +0,0 @@ -""" -apipkg: control the exported namespace of a python package. - -see http://pypi.python.org/pypi/apipkg - -(c) holger krekel, 2009 - MIT license -""" -import sys -from types import ModuleType - -__version__ = "1.0b2" - -def initpkg(pkgname, exportdefs): - """ initialize given package from the export definitions. """ - mod = ApiModule(pkgname, exportdefs, implprefix=pkgname) - oldmod = sys.modules[pkgname] - mod.__file__ = getattr(oldmod, '__file__', None) - mod.__version__ = getattr(oldmod, '__version__', None) - mod.__path__ = getattr(oldmod, '__path__', None) - sys.modules[pkgname] = mod - -def importobj(modpath, attrname): - module = __import__(modpath, None, None, ['__doc__']) - return getattr(module, attrname) - -class ApiModule(ModuleType): - def __init__(self, name, importspec, implprefix=None): - self.__name__ = name - self.__all__ = list(importspec) - self.__map__ = {} - self.__implprefix__ = implprefix or name - for name, importspec in importspec.items(): - if isinstance(importspec, dict): - subname = '%s.%s'%(self.__name__, name) - apimod = ApiModule(subname, importspec, implprefix) - sys.modules[subname] = apimod - setattr(self, name, apimod) - else: - modpath, attrname = importspec.split(':') - if modpath[0] == '.': - modpath = implprefix + modpath - if name == '__doc__': - self.__doc__ = importobj(modpath, attrname) - else: - self.__map__[name] = (modpath, attrname) - - def __repr__(self): - return '' % (self.__name__,) - - def __getattr__(self, name): - try: - modpath, attrname = self.__map__[name] - except KeyError: - raise AttributeError(name) - else: - result = importobj(modpath, attrname) - setattr(self, name, result) - del self.__map__[name] - return result - - def __dict__(self): - # force all the content of the module to be loaded when __dict__ is read - dictdescr = ModuleType.__dict__['__dict__'] - dict = dictdescr.__get__(self) - if dict is not None: - for name in self.__all__: - hasattr(self, name) # force attribute load, ignore errors - return dict - __dict__ = property(__dict__) --- a/_py/path/svnwc.py +++ /dev/null @@ -1,1236 +0,0 @@ -""" 
-svn-Command based Implementation of a Subversion WorkingCopy Path. - - SvnWCCommandPath is the main class. - -""" - -import os, sys, time, re, calendar -import py -import subprocess -from _py.path import common - -#----------------------------------------------------------- -# Caching latest repository revision and repo-paths -# (getting them is slow with the current implementations) -# -# XXX make mt-safe -#----------------------------------------------------------- - -class cache: - proplist = {} - info = {} - entries = {} - prop = {} - -class RepoEntry: - def __init__(self, url, rev, timestamp): - self.url = url - self.rev = rev - self.timestamp = timestamp - - def __str__(self): - return "repo: %s;%s %s" %(self.url, self.rev, self.timestamp) - -class RepoCache: - """ The Repocache manages discovered repository paths - and their revisions. If inside a timeout the cache - will even return the revision of the root. - """ - timeout = 20 # seconds after which we forget that we know the last revision - - def __init__(self): - self.repos = [] - - def clear(self): - self.repos = [] - - def put(self, url, rev, timestamp=None): - if rev is None: - return - if timestamp is None: - timestamp = time.time() - - for entry in self.repos: - if url == entry.url: - entry.timestamp = timestamp - entry.rev = rev - #print "set repo", entry - break - else: - entry = RepoEntry(url, rev, timestamp) - self.repos.append(entry) - #print "appended repo", entry - - def get(self, url): - now = time.time() - for entry in self.repos: - if url.startswith(entry.url): - if now < entry.timestamp + self.timeout: - #print "returning immediate Etrny", entry - return entry.url, entry.rev - return entry.url, -1 - return url, -1 - -repositories = RepoCache() - - -# svn support code - -ALLOWED_CHARS = "_ -/\\=$.~+" #add characters as necessary when tested -if sys.platform == "win32": - ALLOWED_CHARS += ":" -ALLOWED_CHARS_HOST = ALLOWED_CHARS + '@:' - -def _getsvnversion(ver=[]): - try: - return ver[0] - except IndexError: - v = py.process.cmdexec("svn -q --version") - v.strip() - v = '.'.join(v.split('.')[:2]) - ver.append(v) - return v - -def _escape_helper(text): - text = str(text) - if py.std.sys.platform != 'win32': - text = str(text).replace('$', '\\$') - return text - -def _check_for_bad_chars(text, allowed_chars=ALLOWED_CHARS): - for c in str(text): - if c.isalnum(): - continue - if c in allowed_chars: - continue - return True - return False - -def checkbadchars(url): - # (hpk) not quite sure about the exact purpose, guido w.? - proto, uri = url.split("://", 1) - if proto != "file": - host, uripath = uri.split('/', 1) - # only check for bad chars in the non-protocol parts - if (_check_for_bad_chars(host, ALLOWED_CHARS_HOST) \ - or _check_for_bad_chars(uripath, ALLOWED_CHARS)): - raise ValueError("bad char in %r" % (url, )) - - -#_______________________________________________________________ - -class SvnPathBase(common.PathBase): - """ Base implementation for SvnPath implementations. """ - sep = '/' - - def _geturl(self): - return self.strpath - url = property(_geturl, None, None, "url of this svn-path.") - - def __str__(self): - """ return a string representation (including rev-number) """ - return self.strpath - - def __hash__(self): - return hash(self.strpath) - - def new(self, **kw): - """ create a modified version of this path. A 'rev' argument - indicates a new revision. 
- the following keyword arguments modify various path parts: - - http://host.com/repo/path/file.ext - |-----------------------| dirname - |------| basename - |--| purebasename - |--| ext - """ - obj = object.__new__(self.__class__) - obj.rev = kw.get('rev', self.rev) - obj.auth = kw.get('auth', self.auth) - dirname, basename, purebasename, ext = self._getbyspec( - "dirname,basename,purebasename,ext") - if 'basename' in kw: - if 'purebasename' in kw or 'ext' in kw: - raise ValueError("invalid specification %r" % kw) - else: - pb = kw.setdefault('purebasename', purebasename) - ext = kw.setdefault('ext', ext) - if ext and not ext.startswith('.'): - ext = '.' + ext - kw['basename'] = pb + ext - - kw.setdefault('dirname', dirname) - kw.setdefault('sep', self.sep) - if kw['basename']: - obj.strpath = "%(dirname)s%(sep)s%(basename)s" % kw - else: - obj.strpath = "%(dirname)s" % kw - return obj - - def _getbyspec(self, spec): - """ get specified parts of the path. 'arg' is a string - with comma separated path parts. The parts are returned - in exactly the order of the specification. - - you may specify the following parts: - - http://host.com/repo/path/file.ext - |-----------------------| dirname - |------| basename - |--| purebasename - |--| ext - """ - res = [] - parts = self.strpath.split(self.sep) - for name in spec.split(','): - name = name.strip() - if name == 'dirname': - res.append(self.sep.join(parts[:-1])) - elif name == 'basename': - res.append(parts[-1]) - else: - basename = parts[-1] - i = basename.rfind('.') - if i == -1: - purebasename, ext = basename, '' - else: - purebasename, ext = basename[:i], basename[i:] - if name == 'purebasename': - res.append(purebasename) - elif name == 'ext': - res.append(ext) - else: - raise NameError("Don't know part %r" % name) - return res - - def __eq__(self, other): - """ return true if path and rev attributes each match """ - return (str(self) == str(other) and - (self.rev == other.rev or self.rev == other.rev)) - - def __ne__(self, other): - return not self == other - - def join(self, *args): - """ return a new Path (with the same revision) which is composed - of the self Path followed by 'args' path components. - """ - if not args: - return self - - args = tuple([arg.strip(self.sep) for arg in args]) - parts = (self.strpath, ) + args - newpath = self.__class__(self.sep.join(parts), self.rev, self.auth) - return newpath - - def propget(self, name): - """ return the content of the given property. """ - value = self._propget(name) - return value - - def proplist(self): - """ list all property names. """ - content = self._proplist() - return content - - def info(self): - """ return an Info structure with svn-provided information. """ - parent = self.dirpath() - nameinfo_seq = parent._listdir_nameinfo() - bn = self.basename - for name, info in nameinfo_seq: - if name == bn: - return info - raise py.error.ENOENT(self) - - def size(self): - """ Return the size of the file content of the Path. """ - return self.info().size - - def mtime(self): - """ Return the last modification time of the file. """ - return self.info().mtime - - # shared help methods - - def _escape(self, cmd): - return _escape_helper(cmd) - - - #def _childmaxrev(self): - # """ return maximum revision number of childs (or self.rev if no childs) """ - # rev = self.rev - # for name, info in self._listdir_nameinfo(): - # rev = max(rev, info.created_rev) - # return rev - - #def _getlatestrevision(self): - # """ return latest repo-revision for this path. 
""" - # url = self.strpath - # path = self.__class__(url, None) - # - # # we need a long walk to find the root-repo and revision - # while 1: - # try: - # rev = max(rev, path._childmaxrev()) - # previous = path - # path = path.dirpath() - # except (IOError, process.cmdexec.Error): - # break - # if rev is None: - # raise IOError, "could not determine newest repo revision for %s" % self - # return rev - - class Checkers(common.Checkers): - def dir(self): - try: - return self.path.info().kind == 'dir' - except py.error.Error: - return self._listdirworks() - - def _listdirworks(self): - try: - self.path.listdir() - except py.error.ENOENT: - return False - else: - return True - - def file(self): - try: - return self.path.info().kind == 'file' - except py.error.ENOENT: - return False - - def exists(self): - try: - return self.path.info() - except py.error.ENOENT: - return self._listdirworks() - -def parse_apr_time(timestr): - i = timestr.rfind('.') - if i == -1: - raise ValueError("could not parse %s" % timestr) - timestr = timestr[:i] - parsedtime = time.strptime(timestr, "%Y-%m-%dT%H:%M:%S") - return time.mktime(parsedtime) - -class PropListDict(dict): - """ a Dictionary which fetches values (InfoSvnCommand instances) lazily""" - def __init__(self, path, keynames): - dict.__init__(self, [(x, None) for x in keynames]) - self.path = path - - def __getitem__(self, key): - value = dict.__getitem__(self, key) - if value is None: - value = self.path.propget(key) - dict.__setitem__(self, key, value) - return value - -def fixlocale(): - if sys.platform != 'win32': - return 'LC_ALL=C ' - return '' - -# some nasty chunk of code to solve path and url conversion and quoting issues -ILLEGAL_CHARS = '* | \ / : < > ? \t \n \x0b \x0c \r'.split(' ') -if os.sep in ILLEGAL_CHARS: - ILLEGAL_CHARS.remove(os.sep) -ISWINDOWS = sys.platform == 'win32' -_reg_allow_disk = re.compile(r'^([a-z]\:\\)?[^:]+$', re.I) -def _check_path(path): - illegal = ILLEGAL_CHARS[:] - sp = path.strpath - if ISWINDOWS: - illegal.remove(':') - if not _reg_allow_disk.match(sp): - raise ValueError('path may not contain a colon (:)') - for char in sp: - if char not in string.printable or char in illegal: - raise ValueError('illegal character %r in path' % (char,)) - -def path_to_fspath(path, addat=True): - _check_path(path) - sp = path.strpath - if addat and path.rev != -1: - sp = '%s@%s' % (sp, path.rev) - elif addat: - sp = '%s at HEAD' % (sp,) - return sp - -def url_from_path(path): - fspath = path_to_fspath(path, False) - quote = py.std.urllib.quote - if ISWINDOWS: - match = _reg_allow_disk.match(fspath) - fspath = fspath.replace('\\', '/') - if match.group(1): - fspath = '/%s%s' % (match.group(1).replace('\\', '/'), - quote(fspath[len(match.group(1)):])) - else: - fspath = quote(fspath) - else: - fspath = quote(fspath) - if path.rev != -1: - fspath = '%s@%s' % (fspath, path.rev) - else: - fspath = '%s at HEAD' % (fspath,) - return 'file://%s' % (fspath,) - -class SvnAuth(object): - """ container for auth information for Subversion """ - def __init__(self, username, password, cache_auth=True, interactive=True): - self.username = username - self.password = password - self.cache_auth = cache_auth - self.interactive = interactive - - def makecmdoptions(self): - uname = self.username.replace('"', '\\"') - passwd = self.password.replace('"', '\\"') - ret = [] - if uname: - ret.append('--username="%s"' % (uname,)) - if passwd: - ret.append('--password="%s"' % (passwd,)) - if not self.cache_auth: - ret.append('--no-auth-cache') - if not 
self.interactive: - ret.append('--non-interactive') - return ' '.join(ret) - - def __str__(self): - return "" %(self.username,) - -rex_blame = re.compile(r'\s*(\d+)\s*(\S+) (.*)') - -class SvnWCCommandPath(common.PathBase): - """ path implementation offering access/modification to svn working copies. - It has methods similar to the functions in os.path and similar to the - commands of the svn client. - """ - sep = os.sep - - def __new__(cls, wcpath=None, auth=None): - self = object.__new__(cls) - if isinstance(wcpath, cls): - if wcpath.__class__ == cls: - return wcpath - wcpath = wcpath.localpath - if _check_for_bad_chars(str(wcpath), - ALLOWED_CHARS): - raise ValueError("bad char in wcpath %s" % (wcpath, )) - self.localpath = py.path.local(wcpath) - self.auth = auth - return self - - strpath = property(lambda x: str(x.localpath), None, None, "string path") - - def __eq__(self, other): - return self.localpath == getattr(other, 'localpath', None) - - def _geturl(self): - if getattr(self, '_url', None) is None: - info = self.info() - self._url = info.url #SvnPath(info.url, info.rev) - assert isinstance(self._url, py.builtin._basestring) - return self._url - - url = property(_geturl, None, None, "url of this WC item") - - def _escape(self, cmd): - return _escape_helper(cmd) - - def dump(self, obj): - """ pickle object into path location""" - return self.localpath.dump(obj) - - def svnurl(self): - """ return current SvnPath for this WC-item. """ - info = self.info() - return py.path.svnurl(info.url) - - def __repr__(self): - return "svnwc(%r)" % (self.strpath) # , self._url) - - def __str__(self): - return str(self.localpath) - - def _makeauthoptions(self): - if self.auth is None: - return '' - return self.auth.makecmdoptions() - - def _authsvn(self, cmd, args=None): - args = args and list(args) or [] - args.append(self._makeauthoptions()) - return self._svn(cmd, *args) - - def _svn(self, cmd, *args): - l = ['svn %s' % cmd] - args = [self._escape(item) for item in args] - l.extend(args) - l.append('"%s"' % self._escape(self.strpath)) - # try fixing the locale because we can't otherwise parse - string = fixlocale() + " ".join(l) - try: - try: - key = 'LC_MESSAGES' - hold = os.environ.get(key) - os.environ[key] = 'C' - out = py.process.cmdexec(string) - finally: - if hold: - os.environ[key] = hold - else: - del os.environ[key] - except py.process.cmdexec.Error: - e = sys.exc_info()[1] - strerr = e.err.lower() - if strerr.find('file not found') != -1: - raise py.error.ENOENT(self) - if (strerr.find('file exists') != -1 or - strerr.find('file already exists') != -1 or - strerr.find("can't create directory") != -1): - raise py.error.EEXIST(self) - raise - return out - - def switch(self, url): - """ switch to given URL. """ - self._authsvn('switch', [url]) - - def checkout(self, url=None, rev=None): - """ checkout from url to local wcpath. """ - args = [] - if url is None: - url = self.url - if rev is None or rev == -1: - if (py.std.sys.platform != 'win32' and - _getsvnversion() == '1.3'): - url += "@HEAD" - else: - if _getsvnversion() == '1.3': - url += "@%d" % rev - else: - args.append('-r' + str(rev)) - args.append(url) - self._authsvn('co', args) - - def update(self, rev='HEAD'): - """ update working copy item to given revision. (None -> HEAD). """ - self._authsvn('up', ['-r', rev, "--non-interactive"],) - - def write(self, content, mode='w'): - """ write content into local filesystem wc. 
""" - self.localpath.write(content, mode) - - def dirpath(self, *args): - """ return the directory Path of the current Path. """ - return self.__class__(self.localpath.dirpath(*args), auth=self.auth) - - def _ensuredirs(self): - parent = self.dirpath() - if parent.check(dir=0): - parent._ensuredirs() - if self.check(dir=0): - self.mkdir() - return self - - def ensure(self, *args, **kwargs): - """ ensure that an args-joined path exists (by default as - a file). if you specify a keyword argument 'directory=True' - then the path is forced to be a directory path. - """ - p = self.join(*args) - if p.check(): - if p.check(versioned=False): - p.add() - return p - if kwargs.get('dir', 0): - return p._ensuredirs() - parent = p.dirpath() - parent._ensuredirs() - p.write("") - p.add() - return p - - def mkdir(self, *args): - """ create & return the directory joined with args. """ - if args: - return self.join(*args).mkdir() - else: - self._svn('mkdir') - return self - - def add(self): - """ add ourself to svn """ - self._svn('add') - - def remove(self, rec=1, force=1): - """ remove a file or a directory tree. 'rec'ursive is - ignored and considered always true (because of - underlying svn semantics. - """ - assert rec, "svn cannot remove non-recursively" - if not self.check(versioned=True): - # not added to svn (anymore?), just remove - py.path.local(self).remove() - return - flags = [] - if force: - flags.append('--force') - self._svn('remove', *flags) - - def copy(self, target): - """ copy path to target.""" - py.process.cmdexec("svn copy %s %s" %(str(self), str(target))) - - def rename(self, target): - """ rename this path to target. """ - py.process.cmdexec("svn move --force %s %s" %(str(self), str(target))) - - def lock(self): - """ set a lock (exclusive) on the resource """ - out = self._authsvn('lock').strip() - if not out: - # warning or error, raise exception - raise Exception(out[4:]) - - def unlock(self): - """ unset a previously set lock """ - out = self._authsvn('unlock').strip() - if out.startswith('svn:'): - # warning or error, raise exception - raise Exception(out[4:]) - - def cleanup(self): - """ remove any locks from the resource """ - # XXX should be fixed properly!!! - try: - self.unlock() - except: - pass - - def status(self, updates=0, rec=0, externals=0): - """ return (collective) Status object for this file. """ - # http://svnbook.red-bean.com/book.html#svn-ch-3-sect-4.3.1 - # 2201 2192 jum test - # XXX - if externals: - raise ValueError("XXX cannot perform status() " - "on external items yet") - else: - #1.2 supports: externals = '--ignore-externals' - externals = '' - if rec: - rec= '' - else: - rec = '--non-recursive' - - # XXX does not work on all subversion versions - #if not externals: - # externals = '--ignore-externals' - - if updates: - updates = '-u' - else: - updates = '' - - try: - cmd = 'status -v --xml --no-ignore %s %s %s' % ( - updates, rec, externals) - out = self._authsvn(cmd) - except py.process.cmdexec.Error: - cmd = 'status -v --no-ignore %s %s %s' % ( - updates, rec, externals) - out = self._authsvn(cmd) - rootstatus = WCStatus(self).fromstring(out, self) - else: - rootstatus = XMLWCStatus(self).fromstring(out, self) - return rootstatus - - def diff(self, rev=None): - """ return a diff of the current path against revision rev (defaulting - to the last one). 
- """ - args = [] - if rev is not None: - args.append("-r %d" % rev) - out = self._authsvn('diff', args) - return out - - def blame(self): - """ return a list of tuples of three elements: - (revision, commiter, line) - """ - out = self._svn('blame') - result = [] - blamelines = out.splitlines() - reallines = py.path.svnurl(self.url).readlines() - for i, (blameline, line) in enumerate( - zip(blamelines, reallines)): - m = rex_blame.match(blameline) - if not m: - raise ValueError("output line %r of svn blame does not match " - "expected format" % (line, )) - rev, name, _ = m.groups() - result.append((int(rev), name, line)) - return result - - _rex_commit = re.compile(r'.*Committed revision (\d+)\.$', re.DOTALL) - def commit(self, msg='', rec=1): - """ commit with support for non-recursive commits """ - # XXX i guess escaping should be done better here?!? - cmd = 'commit -m "%s" --force-log' % (msg.replace('"', '\\"'),) - if not rec: - cmd += ' -N' - out = self._authsvn(cmd) - try: - del cache.info[self] - except KeyError: - pass - if out: - m = self._rex_commit.match(out) - return int(m.group(1)) - - def propset(self, name, value, *args): - """ set property name to value on this path. """ - d = py.path.local.mkdtemp() - try: - p = d.join('value') - p.write(value) - self._svn('propset', name, '--file', str(p), *args) - finally: - d.remove() - - def propget(self, name): - """ get property name on this path. """ - res = self._svn('propget', name) - return res[:-1] # strip trailing newline - - def propdel(self, name): - """ delete property name on this path. """ - res = self._svn('propdel', name) - return res[:-1] # strip trailing newline - - def proplist(self, rec=0): - """ return a mapping of property names to property values. -If rec is True, then return a dictionary mapping sub-paths to such mappings. -""" - if rec: - res = self._svn('proplist -R') - return make_recursive_propdict(self, res) - else: - res = self._svn('proplist') - lines = res.split('\n') - lines = [x.strip() for x in lines[1:]] - return PropListDict(self, lines) - - def revert(self, rec=0): - """ revert the local changes of this path. if rec is True, do so -recursively. """ - if rec: - result = self._svn('revert -R') - else: - result = self._svn('revert') - return result - - def new(self, **kw): - """ create a modified version of this path. A 'rev' argument - indicates a new revision. - the following keyword arguments modify various path parts: - - http://host.com/repo/path/file.ext - |-----------------------| dirname - |------| basename - |--| purebasename - |--| ext - """ - if kw: - localpath = self.localpath.new(**kw) - else: - localpath = self.localpath - return self.__class__(localpath, auth=self.auth) - - def join(self, *args, **kwargs): - """ return a new Path (with the same revision) which is composed - of the self Path followed by 'args' path components. - """ - if not args: - return self - localpath = self.localpath.join(*args, **kwargs) - return self.__class__(localpath, auth=self.auth) - - def info(self, usecache=1): - """ return an Info structure with svn-provided information. 
""" - info = usecache and cache.info.get(self) - if not info: - try: - output = self._svn('info') - except py.process.cmdexec.Error: - e = sys.exc_info()[1] - if e.err.find('Path is not a working copy directory') != -1: - raise py.error.ENOENT(self, e.err) - elif e.err.find("is not under version control") != -1: - raise py.error.ENOENT(self, e.err) - raise - # XXX SVN 1.3 has output on stderr instead of stdout (while it does - # return 0!), so a bit nasty, but we assume no output is output - # to stderr... - if (output.strip() == '' or - output.lower().find('not a versioned resource') != -1): - raise py.error.ENOENT(self, output) - info = InfoSvnWCCommand(output) - - # Can't reliably compare on Windows without access to win32api - if py.std.sys.platform != 'win32': - if info.path != self.localpath: - raise py.error.ENOENT(self, "not a versioned resource:" + - " %s != %s" % (info.path, self.localpath)) - cache.info[self] = info - self.rev = info.rev - return info - - def listdir(self, fil=None, sort=None): - """ return a sequence of Paths. - - listdir will return either a tuple or a list of paths - depending on implementation choices. - """ - if isinstance(fil, str): - fil = common.FNMatcher(fil) - # XXX unify argument naming with LocalPath.listdir - def notsvn(path): - return path.basename != '.svn' - - paths = [self.__class__(p, auth=self.auth) - for p in self.localpath.listdir() - if notsvn(p) and (not fil or fil(p))] - self._sortlist(paths, sort) - return paths - - def open(self, mode='r'): - """ return an opened file with the given mode. """ - return open(self.strpath, mode) - - def _getbyspec(self, spec): - return self.localpath._getbyspec(spec) - - class Checkers(py.path.local.Checkers): - def __init__(self, path): - self.svnwcpath = path - self.path = path.localpath - def versioned(self): - try: - s = self.svnwcpath.info() - except (py.error.ENOENT, py.error.EEXIST): - return False - except py.process.cmdexec.Error: - e = sys.exc_info()[1] - if e.err.find('is not a working copy')!=-1: - return False - if e.err.lower().find('not a versioned resource') != -1: - return False - raise - else: - return True - - def log(self, rev_start=None, rev_end=1, verbose=False): - """ return a list of LogEntry instances for this path. -rev_start is the starting revision (defaulting to the first one). -rev_end is the last revision (defaulting to HEAD). -if verbose is True, then the LogEntry instances also know which files changed. 
-""" - assert self.check() # make it simpler for the pipe - rev_start = rev_start is None and "HEAD" or rev_start - rev_end = rev_end is None and "HEAD" or rev_end - if rev_start == "HEAD" and rev_end == 1: - rev_opt = "" - else: - rev_opt = "-r %s:%s" % (rev_start, rev_end) - verbose_opt = verbose and "-v" or "" - locale_env = fixlocale() - # some blather on stderr - auth_opt = self._makeauthoptions() - #stdin, stdout, stderr = os.popen3(locale_env + - # 'svn log --xml %s %s %s "%s"' % ( - # rev_opt, verbose_opt, auth_opt, - # self.strpath)) - cmd = locale_env + 'svn log --xml %s %s %s "%s"' % ( - rev_opt, verbose_opt, auth_opt, self.strpath) - - popen = subprocess.Popen(cmd, - stdout=subprocess.PIPE, - stderr=subprocess.PIPE, - shell=True, - ) - stdout, stderr = popen.communicate() - stdout = py.builtin._totext(stdout, sys.getdefaultencoding()) - minidom,ExpatError = importxml() - try: - tree = minidom.parseString(stdout) - except ExpatError: - raise ValueError('no such revision') - result = [] - for logentry in filter(None, tree.firstChild.childNodes): - if logentry.nodeType == logentry.ELEMENT_NODE: - result.append(LogEntry(logentry)) - return result - - def size(self): - """ Return the size of the file content of the Path. """ - return self.info().size - - def mtime(self): - """ Return the last modification time of the file. """ - return self.info().mtime - - def __hash__(self): - return hash((self.strpath, self.__class__, self.auth)) - - -class WCStatus: - attrnames = ('modified','added', 'conflict', 'unchanged', 'external', - 'deleted', 'prop_modified', 'unknown', 'update_available', - 'incomplete', 'kindmismatch', 'ignored', 'locked', 'replaced' - ) - - def __init__(self, wcpath, rev=None, modrev=None, author=None): - self.wcpath = wcpath - self.rev = rev - self.modrev = modrev - self.author = author - - for name in self.attrnames: - setattr(self, name, []) - - def allpath(self, sort=True, **kw): - d = {} - for name in self.attrnames: - if name not in kw or kw[name]: - for path in getattr(self, name): - d[path] = 1 - l = d.keys() - if sort: - l.sort() - return l - - # XXX a bit scary to assume there's always 2 spaces between username and - # path, however with win32 allowing spaces in user names there doesn't - # seem to be a more solid approach :( - _rex_status = re.compile(r'\s+(\d+|-)\s+(\S+)\s+(.+?)\s{2,}(.*)') - - def fromstring(data, rootwcpath, rev=None, modrev=None, author=None): - """ return a new WCStatus object from data 's' - """ - rootstatus = WCStatus(rootwcpath, rev, modrev, author) - update_rev = None - for line in data.split('\n'): - if not line.strip(): - continue - #print "processing %r" % line - flags, rest = line[:8], line[8:] - # first column - c0,c1,c2,c3,c4,c5,x6,c7 = flags - #if '*' in line: - # print "flags", repr(flags), "rest", repr(rest) - - if c0 in '?XI': - fn = line.split(None, 1)[1] - if c0 == '?': - wcpath = rootwcpath.join(fn, abs=1) - rootstatus.unknown.append(wcpath) - elif c0 == 'X': - wcpath = rootwcpath.__class__( - rootwcpath.localpath.join(fn, abs=1), - auth=rootwcpath.auth) - rootstatus.external.append(wcpath) - elif c0 == 'I': - wcpath = rootwcpath.join(fn, abs=1) - rootstatus.ignored.append(wcpath) - - continue - - #elif c0 in '~!' 
or c4 == 'S': - # raise NotImplementedError("received flag %r" % c0) - - m = WCStatus._rex_status.match(rest) - if not m: - if c7 == '*': - fn = rest.strip() - wcpath = rootwcpath.join(fn, abs=1) - rootstatus.update_available.append(wcpath) - continue - if line.lower().find('against revision:')!=-1: - update_rev = int(rest.split(':')[1].strip()) - continue - if line.lower().find('status on external') > -1: - # XXX not sure what to do here... perhaps we want to - # store some state instead of just continuing, as right - # now it makes the top-level external get added twice - # (once as external, once as 'normal' unchanged item) - # because of the way SVN presents external items - continue - # keep trying - raise ValueError("could not parse line %r" % line) - else: - rev, modrev, author, fn = m.groups() - wcpath = rootwcpath.join(fn, abs=1) - #assert wcpath.check() - if c0 == 'M': - assert wcpath.check(file=1), "didn't expect a directory with changed content here" - rootstatus.modified.append(wcpath) - elif c0 == 'A' or c3 == '+' : - rootstatus.added.append(wcpath) - elif c0 == 'D': - rootstatus.deleted.append(wcpath) - elif c0 == 'C': - rootstatus.conflict.append(wcpath) - elif c0 == '~': - rootstatus.kindmismatch.append(wcpath) - elif c0 == '!': - rootstatus.incomplete.append(wcpath) - elif c0 == 'R': - rootstatus.replaced.append(wcpath) - elif not c0.strip(): - rootstatus.unchanged.append(wcpath) - else: - raise NotImplementedError("received flag %r" % c0) - - if c1 == 'M': - rootstatus.prop_modified.append(wcpath) - # XXX do we cover all client versions here? - if c2 == 'L' or c5 == 'K': - rootstatus.locked.append(wcpath) - if c7 == '*': - rootstatus.update_available.append(wcpath) - - if wcpath == rootwcpath: - rootstatus.rev = rev - rootstatus.modrev = modrev - rootstatus.author = author - if update_rev: - rootstatus.update_rev = update_rev - continue - return rootstatus - fromstring = staticmethod(fromstring) - -class XMLWCStatus(WCStatus): - def fromstring(data, rootwcpath, rev=None, modrev=None, author=None): - """ parse 'data' (XML string as outputted by svn st) into a status obj - """ - # XXX for externals, the path is shown twice: once - # with external information, and once with full info as if - # the item was a normal non-external... 
the current way of - # dealing with this issue is by ignoring it - this does make - # externals appear as external items as well as 'normal', - # unchanged ones in the status object so this is far from ideal - rootstatus = WCStatus(rootwcpath, rev, modrev, author) - update_rev = None - minidom, ExpatError = importxml() - try: - doc = minidom.parseString(data) - except ExpatError: - e = sys.exc_info()[1] - raise ValueError(str(e)) - urevels = doc.getElementsByTagName('against') - if urevels: - rootstatus.update_rev = urevels[-1].getAttribute('revision') - for entryel in doc.getElementsByTagName('entry'): - path = entryel.getAttribute('path') - statusel = entryel.getElementsByTagName('wc-status')[0] - itemstatus = statusel.getAttribute('item') - - if itemstatus == 'unversioned': - wcpath = rootwcpath.join(path, abs=1) - rootstatus.unknown.append(wcpath) - continue - elif itemstatus == 'external': - wcpath = rootwcpath.__class__( - rootwcpath.localpath.join(path, abs=1), - auth=rootwcpath.auth) - rootstatus.external.append(wcpath) - continue - elif itemstatus == 'ignored': - wcpath = rootwcpath.join(path, abs=1) - rootstatus.ignored.append(wcpath) - continue - elif itemstatus == 'incomplete': - wcpath = rootwcpath.join(path, abs=1) - rootstatus.incomplete.append(wcpath) - continue - - rev = statusel.getAttribute('revision') - if itemstatus == 'added' or itemstatus == 'none': - rev = '0' - modrev = '?' - author = '?' - date = '' - else: - #print entryel.toxml() - commitel = entryel.getElementsByTagName('commit')[0] - if commitel: - modrev = commitel.getAttribute('revision') - author = '' - author_els = commitel.getElementsByTagName('author') - if author_els: - for c in author_els[0].childNodes: - author += c.nodeValue - date = '' - for c in commitel.getElementsByTagName('date')[0]\ - .childNodes: - date += c.nodeValue - - wcpath = rootwcpath.join(path, abs=1) - - assert itemstatus != 'modified' or wcpath.check(file=1), ( - 'did\'t expect a directory with changed content here') - - itemattrname = { - 'normal': 'unchanged', - 'unversioned': 'unknown', - 'conflicted': 'conflict', - 'none': 'added', - }.get(itemstatus, itemstatus) - - attr = getattr(rootstatus, itemattrname) - attr.append(wcpath) - - propsstatus = statusel.getAttribute('props') - if propsstatus not in ('none', 'normal'): - rootstatus.prop_modified.append(wcpath) - - if wcpath == rootwcpath: - rootstatus.rev = rev - rootstatus.modrev = modrev - rootstatus.author = author - rootstatus.date = date - - # handle repos-status element (remote info) - rstatusels = entryel.getElementsByTagName('repos-status') - if rstatusels: - rstatusel = rstatusels[0] - ritemstatus = rstatusel.getAttribute('item') - if ritemstatus in ('added', 'modified'): - rootstatus.update_available.append(wcpath) - - lockels = entryel.getElementsByTagName('lock') - if len(lockels): - rootstatus.locked.append(wcpath) - - return rootstatus - fromstring = staticmethod(fromstring) - -class InfoSvnWCCommand: - def __init__(self, output): - # Path: test - # URL: http://codespeak.net/svn/std.path/trunk/dist/std.path/test - # Repository UUID: fd0d7bf2-dfb6-0310-8d31-b7ecfe96aada - # Revision: 2151 - # Node Kind: directory - # Schedule: normal - # Last Changed Author: hpk - # Last Changed Rev: 2100 - # Last Changed Date: 2003-10-27 20:43:14 +0100 (Mon, 27 Oct 2003) - # Properties Last Updated: 2003-11-03 14:47:48 +0100 (Mon, 03 Nov 2003) - - d = {} - for line in output.split('\n'): - if not line.strip(): - continue - key, value = line.split(':', 1) - key = 
key.lower().replace(' ', '') - value = value.strip() - d[key] = value - try: - self.url = d['url'] - except KeyError: - raise ValueError("Not a versioned resource") - #raise ValueError, "Not a versioned resource %r" % path - self.kind = d['nodekind'] == 'directory' and 'dir' or d['nodekind'] - self.rev = int(d['revision']) - self.path = py.path.local(d['path']) - self.size = self.path.size() - if 'lastchangedrev' in d: - self.created_rev = int(d['lastchangedrev']) - if 'lastchangedauthor' in d: - self.last_author = d['lastchangedauthor'] - if 'lastchangeddate' in d: - self.mtime = parse_wcinfotime(d['lastchangeddate']) - self.time = self.mtime * 1000000 - - def __eq__(self, other): - return self.__dict__ == other.__dict__ - -def parse_wcinfotime(timestr): - """ Returns seconds since epoch, UTC. """ - # example: 2003-10-27 20:43:14 +0100 (Mon, 27 Oct 2003) - m = re.match(r'(\d+-\d+-\d+ \d+:\d+:\d+) ([+-]\d+) .*', timestr) - if not m: - raise ValueError("timestring %r does not match" % timestr) - timestr, timezone = m.groups() - # do not handle timezone specially, return value should be UTC - parsedtime = time.strptime(timestr, "%Y-%m-%d %H:%M:%S") - return calendar.timegm(parsedtime) - -def make_recursive_propdict(wcroot, - output, - rex = re.compile("Properties on '(.*)':")): - """ Return a dictionary of path->PropListDict mappings. """ - lines = [x for x in output.split('\n') if x] - pdict = {} - while lines: - line = lines.pop(0) - m = rex.match(line) - if not m: - raise ValueError("could not parse propget-line: %r" % line) - path = m.groups()[0] - wcpath = wcroot.join(path, abs=1) - propnames = [] - while lines and lines[0].startswith(' '): - propname = lines.pop(0).strip() - propnames.append(propname) - assert propnames, "must have found properties!" - pdict[wcpath] = PropListDict(wcpath, propnames) - return pdict - - -def importxml(cache=[]): - if cache: - return cache - from xml.dom import minidom - from xml.parsers.expat import ExpatError - cache.extend([minidom, ExpatError]) - return cache - -class LogEntry: - def __init__(self, logentry): - self.rev = int(logentry.getAttribute('revision')) - for lpart in filter(None, logentry.childNodes): - if lpart.nodeType == lpart.ELEMENT_NODE: - if lpart.nodeName == 'author': - self.author = lpart.firstChild.nodeValue - elif lpart.nodeName == 'msg': - if lpart.firstChild: - self.msg = lpart.firstChild.nodeValue - else: - self.msg = '' - elif lpart.nodeName == 'date': - #2003-07-29T20:05:11.598637Z - timestr = lpart.firstChild.nodeValue - self.date = parse_apr_time(timestr) - elif lpart.nodeName == 'paths': - self.strpaths = [] - for ppart in filter(None, lpart.childNodes): - if ppart.nodeType == ppart.ELEMENT_NODE: - self.strpaths.append(PathEntry(ppart)) - def __repr__(self): - return '' % ( - self.rev, self.author, self.date) - - --- a/_py/code/assertion.py +++ /dev/null @@ -1,75 +0,0 @@ -import sys -import py - -BuiltinAssertionError = py.builtin.builtins.AssertionError - - -def _format_explanation(explanation): - # uck! 
See CallFunc for where \n{ and \n} escape sequences are used - raw_lines = (explanation or '').split('\n') - # escape newlines not followed by { and } - lines = [raw_lines[0]] - for l in raw_lines[1:]: - if l.startswith('{') or l.startswith('}'): - lines.append(l) - else: - lines[-1] += '\\n' + l - - result = lines[:1] - stack = [0] - stackcnt = [0] - for line in lines[1:]: - if line.startswith('{'): - if stackcnt[-1]: - s = 'and ' - else: - s = 'where ' - stack.append(len(result)) - stackcnt[-1] += 1 - stackcnt.append(0) - result.append(' +' + ' '*(len(stack)-1) + s + line[1:]) - else: - assert line.startswith('}') - stack.pop() - stackcnt.pop() - result[stack[-1]] += line[1:] - assert len(stack) == 1 - return '\n'.join(result) - - -if sys.version_info >= (2, 6) or (sys.platform.startswith("java")): - from _py.code._assertionnew import interpret -else: - from _py.code._assertionold import interpret - - -class AssertionError(BuiltinAssertionError): - - def __init__(self, *args): - BuiltinAssertionError.__init__(self, *args) - if args: - try: - self.msg = str(args[0]) - except (KeyboardInterrupt, SystemExit): - raise - except: - self.msg = "<[broken __repr__] %s at %0xd>" %( - args[0].__class__, id(args[0])) - else: - f = py.code.Frame(sys._getframe(1)) - try: - source = f.statement - source = str(source.deindent()).strip() - except py.error.ENOENT: - source = None - # this can also occur during reinterpretation, when the - # co_filename is set to "". - if source: - self.msg = interpret(source, f, should_fail=True) - if not self.args: - self.args = (self.msg,) - else: - self.msg = None - -if sys.version_info > (3, 0): - AssertionError.__module__ = "builtins" --- a/_py/log/warning.py +++ /dev/null @@ -1,72 +0,0 @@ -import py, sys - -class Warning(DeprecationWarning): - def __init__(self, msg, path, lineno): - self.msg = msg - self.path = path - self.lineno = lineno - def __repr__(self): - return "%s:%d: %s" %(self.path, self.lineno+1, self.msg) - def __str__(self): - return self.msg - -def _apiwarn(startversion, msg, stacklevel=2, function=None): - # below is mostly COPIED from python2.4/warnings.py's def warn() - # Get context information - if stacklevel == "initpkg": - frame = sys._getframe(stacklevel == "initpkg" and 1 or stacklevel) - level = 2 - while frame: - co = frame.f_code - if co.co_name == "__getattr__" and co.co_filename.find("initpkg") !=-1: - stacklevel = level - break - level += 1 - frame = frame.f_back - else: - stacklevel = 1 - msg = "%s (since version %s)" %(msg, startversion) - warn(msg, stacklevel=stacklevel+1, function=function) - -def warn(msg, stacklevel=1, function=None): - if function is not None: - filename = py.std.inspect.getfile(function) - lineno = py.code.getrawcode(function).co_firstlineno - else: - try: - caller = sys._getframe(stacklevel) - except ValueError: - globals = sys.__dict__ - lineno = 1 - else: - globals = caller.f_globals - lineno = caller.f_lineno - if '__name__' in globals: - module = globals['__name__'] - else: - module = "" - filename = globals.get('__file__') - if filename: - fnl = filename.lower() - if fnl.endswith(".pyc") or fnl.endswith(".pyo"): - filename = filename[:-1] - elif fnl.endswith("$py.class"): - filename = filename.replace('$py.class', '.py') - else: - if module == "__main__": - try: - filename = sys.argv[0] - except AttributeError: - # embedded interpreters don't have sys.argv, see bug #839151 - filename = '__main__' - if not filename: - filename = module - path = py.path.local(filename) - warning = Warning(msg, path, lineno) 
- py.std.warnings.warn_explicit(warning, category=Warning, - filename=str(warning.path), - lineno=warning.lineno, - registry=py.std.warnings.__dict__.setdefault( - "__warningsregistry__", {}) - ) - --- a/_py/path/gateway/__init__.py +++ /dev/null @@ -1,1 +0,0 @@ -# --- a/_py/io/__init__.py +++ /dev/null @@ -1,1 +0,0 @@ -""" input/output helping """ --- a/_py/path/common.py +++ /dev/null @@ -1,333 +0,0 @@ -""" -""" -import os, sys -import py - -class Checkers: - _depend_on_existence = 'exists', 'link', 'dir', 'file' - - def __init__(self, path): - self.path = path - - def dir(self): - raise NotImplementedError - - def file(self): - raise NotImplementedError - - def dotfile(self): - return self.path.basename.startswith('.') - - def ext(self, arg): - if not arg.startswith('.'): - arg = '.' + arg - return self.path.ext == arg - - def exists(self): - raise NotImplementedError - - def basename(self, arg): - return self.path.basename == arg - - def basestarts(self, arg): - return self.path.basename.startswith(arg) - - def relto(self, arg): - return self.path.relto(arg) - - def fnmatch(self, arg): - return FNMatcher(arg)(self.path) - - def endswith(self, arg): - return str(self.path).endswith(arg) - - def _evaluate(self, kw): - for name, value in kw.items(): - invert = False - meth = None - try: - meth = getattr(self, name) - except AttributeError: - if name[:3] == 'not': - invert = True - try: - meth = getattr(self, name[3:]) - except AttributeError: - pass - if meth is None: - raise TypeError( - "no %r checker available for %r" % (name, self.path)) - try: - if py.code.getrawcode(meth).co_argcount > 1: - if (not meth(value)) ^ invert: - return False - else: - if bool(value) ^ bool(meth()) ^ invert: - return False - except (py.error.ENOENT, py.error.ENOTDIR): - for name in self._depend_on_existence: - if name in kw: - if kw.get(name): - return False - name = 'not' + name - if name in kw: - if not kw.get(name): - return False - return True - -class NeverRaised(Exception): - pass - -class PathBase(object): - """ shared implementation for filesystem path objects.""" - Checkers = Checkers - - def __div__(self, other): - return self.join(str(other)) - __truediv__ = __div__ # py3k - - def basename(self): - """ basename part of path. """ - return self._getbyspec('basename')[0] - basename = property(basename, None, None, basename.__doc__) - - def purebasename(self): - """ pure base name of the path.""" - return self._getbyspec('purebasename')[0] - purebasename = property(purebasename, None, None, purebasename.__doc__) - - def ext(self): - """ extension of the path (including the '.').""" - return self._getbyspec('ext')[0] - ext = property(ext, None, None, ext.__doc__) - - def dirpath(self, *args, **kwargs): - """ return the directory Path of the current Path joined - with any given path arguments. - """ - return self.new(basename='').join(*args, **kwargs) - - def read(self, mode='r'): - """ read and return a bytestring from reading the path. """ - if sys.version_info < (2,3): - for x in 'u', 'U': - if x in mode: - mode = mode.replace(x, '') - f = self.open(mode) - try: - return f.read() - finally: - f.close() - - def readlines(self, cr=1): - """ read and return a list of lines from the path. if cr is False, the -newline will be removed from the end of each line. 
""" - if not cr: - content = self.read('rU') - return content.split('\n') - else: - f = self.open('rU') - try: - return f.readlines() - finally: - f.close() - - def load(self): - """ (deprecated) return object unpickled from self.read() """ - f = self.open('rb') - try: - return py.error.checked_call(py.std.pickle.load, f) - finally: - f.close() - - def move(self, target): - """ move this path to target. """ - if target.relto(self): - raise py.error.EINVAL(target, - "cannot move path into a subdirectory of itself") - try: - self.rename(target) - except py.error.EXDEV: # invalid cross-device link - self.copy(target) - self.remove() - - def __repr__(self): - """ return a string representation of this path. """ - return repr(str(self)) - - def check(self, **kw): - """ check a path for existence, or query its properties - - without arguments, this returns True if the path exists (on the - filesystem), False if not - - with (keyword only) arguments, the object compares the value - of the argument with the value of a property with the same name - (if it has one, else it raises a TypeError) - - when for example the keyword argument 'ext' is '.py', this will - return True if self.ext == '.py', False otherwise - """ - if not kw: - kw = {'exists' : 1} - return self.Checkers(self)._evaluate(kw) - - def relto(self, relpath): - """ return a string which is the relative part of the path - to the given 'relpath'. - """ - if not isinstance(relpath, (str, PathBase)): - raise TypeError("%r: not a string or path object" %(relpath,)) - strrelpath = str(relpath) - if strrelpath and strrelpath[-1] != self.sep: - strrelpath += self.sep - #assert strrelpath[-1] == self.sep - #assert strrelpath[-2] != self.sep - strself = str(self) - if sys.platform == "win32": - if os.path.normcase(strself).startswith( - os.path.normcase(strrelpath)): - return strself[len(strrelpath):] - elif strself.startswith(strrelpath): - return strself[len(strrelpath):] - return "" - - def bestrelpath(self, dest): - """ return a string which is a relative path from self - to dest such that self.join(bestrelpath) == dest and - if not such path can be determined return dest. - """ - try: - base = self.common(dest) - if not base: # can be the case on windows - return str(dest) - self2base = self.relto(base) - reldest = dest.relto(base) - if self2base: - n = self2base.count(self.sep) + 1 - else: - n = 0 - l = ['..'] * n - if reldest: - l.append(reldest) - target = dest.sep.join(l) - return target - except AttributeError: - return str(dest) - - - def parts(self, reverse=False): - """ return a root-first list of all ancestor directories - plus the path itself. - """ - current = self - l = [self] - while 1: - last = current - current = current.dirpath() - if last == current: - break - l.insert(0, current) - if reverse: - l.reverse() - return l - - def common(self, other): - """ return the common part shared with the other path - or None if there is no common part. - """ - last = None - for x, y in zip(self.parts(), other.parts()): - if x != y: - return last - last = x - return last - - def __add__(self, other): - """ return new path object with 'other' added to the basename""" - return self.new(basename=self.basename+str(other)) - - def __cmp__(self, other): - """ return sort value (-1, 0, +1). 
""" - try: - return cmp(self.strpath, other.strpath) - except AttributeError: - return cmp(str(self), str(other)) # self.path, other.path) - - def __lt__(self, other): - try: - return self.strpath < other.strpath - except AttributeError: - return str(self) < str(other) - - def visit(self, fil=None, rec=None, ignore=NeverRaised): - """ yields all paths below the current one - - fil is a filter (glob pattern or callable), if not matching the - path will not be yielded, defaulting to None (everything is - returned) - - rec is a filter (glob pattern or callable) that controls whether - a node is descended, defaulting to None - - ignore is an Exception class that is ignoredwhen calling dirlist() - on any of the paths (by default, all exceptions are reported) - """ - if isinstance(fil, str): - fil = FNMatcher(fil) - if rec: - if isinstance(rec, str): - rec = fnmatch(fil) - elif not hasattr(rec, '__call__'): - rec = None - try: - entries = self.listdir() - except ignore: - return - dirs = [p for p in entries - if p.check(dir=1) and (rec is None or rec(p))] - for subdir in dirs: - for p in subdir.visit(fil=fil, rec=rec, ignore=ignore): - yield p - for p in entries: - if fil is None or fil(p): - yield p - - def _sortlist(self, res, sort): - if sort: - if hasattr(sort, '__call__'): - res.sort(sort) - else: - res.sort() - - def samefile(self, other): - """ return True if other refers to the same stat object as self. """ - return self.strpath == str(other) - -class FNMatcher: - def __init__(self, pattern): - self.pattern = pattern - def __call__(self, path): - """return true if the basename/fullname matches the glob-'pattern'. - - * matches everything - ? matches any single character - [seq] matches any character in seq - [!seq] matches any char not in seq - - if the pattern contains a path-separator then the full path - is used for pattern matching and a '*' is prepended to the - pattern. - - if the pattern doesn't contain a path-separator the pattern - is only matched against the basename. - """ - pattern = self.pattern - if pattern.find(path.sep) == -1: - name = path.basename - else: - name = str(path) # path.strpath # XXX svn? - pattern = '*' + path.sep + pattern - from fnmatch import fnmatch - return fnmatch(name, pattern) - --- a/_py/std.py +++ /dev/null @@ -1,18 +0,0 @@ -import sys - -class Std(object): - """ makes top-level python modules available as an attribute, - importing them on first access. 
- """ - - def __init__(self): - self.__dict__ = sys.modules - - def __getattr__(self, name): - try: - m = __import__(name) - except ImportError: - raise AttributeError("py.std: could not import %s" % name) - return m - -std = Std() --- a/_py/test/cmdline.py +++ /dev/null @@ -1,23 +0,0 @@ -import py -import sys - -# -# main entry point -# - -def main(args=None): - if args is None: - args = sys.argv[1:] - config = py.test.config - try: - config.parse(args) - config.pluginmanager.do_configure(config) - session = config.initsession() - exitstatus = session.main() - config.pluginmanager.do_unconfigure(config) - raise SystemExit(exitstatus) - except config.Error: - e = sys.exc_info()[1] - sys.stderr.write("ERROR: %s\n" %(e.args[0],)) - raise SystemExit(3) - --- a/_py/compat/dep_optparse.py +++ /dev/null @@ -1,4 +0,0 @@ -import py -py.log._apiwarn("1.1", "py.compat.optparse deprecated, use standard library version.", stacklevel="initpkg") - -optparse = py.std.optparse --- a/_py/cmdline/pycountloc.py +++ /dev/null @@ -1,94 +0,0 @@ -#!/usr/bin/env python - -# hands on script to compute the non-empty Lines of Code -# for tests and non-test code - -"""\ -py.countloc [PATHS] - -Count (non-empty) lines of python code and number of python files recursively -starting from a list of paths given on the command line (starting from the -current working directory). Distinguish between test files and normal ones and -report them separately. -""" -import py - -def main(): - parser = py.std.optparse.OptionParser(usage=__doc__) - (options, args) = parser.parse_args() - countloc(args) - -def nodot(p): - return p.check(dotfile=0) - -class FileCounter(object): - def __init__(self): - self.file2numlines = {} - self.numlines = 0 - self.numfiles = 0 - - def addrecursive(self, directory, fil="*.py", rec=nodot): - for x in directory.visit(fil, rec): - self.addfile(x) - - def addfile(self, fn, emptylines=False): - if emptylines: - s = len(p.readlines()) - else: - s = 0 - for i in fn.readlines(): - if i.strip(): - s += 1 - self.file2numlines[fn] = s - self.numfiles += 1 - self.numlines += s - - def getnumlines(self, fil): - numlines = 0 - for path, value in self.file2numlines.items(): - if fil(path): - numlines += value - return numlines - - def getnumfiles(self, fil): - numfiles = 0 - for path in self.file2numlines: - if fil(path): - numfiles += 1 - return numfiles - -def get_loccount(locations=None): - if locations is None: - localtions = [py.path.local()] - counter = FileCounter() - for loc in locations: - counter.addrecursive(loc, '*.py', rec=nodot) - - def istestfile(p): - return p.check(fnmatch='test_*.py') - isnottestfile = lambda x: not istestfile(x) - - numfiles = counter.getnumfiles(isnottestfile) - numlines = counter.getnumlines(isnottestfile) - numtestfiles = counter.getnumfiles(istestfile) - numtestlines = counter.getnumlines(istestfile) - - return counter, numfiles, numlines, numtestfiles, numtestlines - -def countloc(paths=None): - if not paths: - paths = ['.'] - locations = [py.path.local(x) for x in paths] - (counter, numfiles, numlines, numtestfiles, - numtestlines) = get_loccount(locations) - - items = counter.file2numlines.items() - items.sort(lambda x,y: cmp(x[1], y[1])) - for x, y in items: - print("%3d %30s" % (y,x)) - - print("%30s %3d" %("number of testfiles", numtestfiles)) - print("%30s %3d" %("number of non-empty testlines", numtestlines)) - print("%30s %3d" %("number of files", numfiles)) - print("%30s %3d" %("number of non-empty lines", numlines)) - --- a/_py/code/oldmagic.py +++ /dev/null @@ 
-1,62 +0,0 @@ -""" deprecated module for turning on/off some features. """ - -import py - -from py.builtin import builtins as cpy_builtin - -def invoke(assertion=False, compile=False): - """ (deprecated) invoke magic, currently you can specify: - - assertion patches the builtin AssertionError to try to give - more meaningful AssertionErrors, which by means - of deploying a mini-interpreter constructs - a useful error message. - """ - py.log._apiwarn("1.1", - "py.magic.invoke() is deprecated, use py.code.patch_builtins()", - stacklevel=2, - ) - py.code.patch_builtins(assertion=assertion, compile=compile) - -def revoke(assertion=False, compile=False): - """ (deprecated) revoke previously invoked magic (see invoke()).""" - py.log._apiwarn("1.1", - "py.magic.revoke() is deprecated, use py.code.unpatch_builtins()", - stacklevel=2, - ) - py.code.unpatch_builtins(assertion=assertion, compile=compile) - -patched = {} - -def patch(namespace, name, value): - """ (deprecated) rebind the 'name' on the 'namespace' to the 'value', - possibly and remember the original value. Multiple - invocations to the same namespace/name pair will - remember a list of old values. - """ - py.log._apiwarn("1.1", - "py.magic.patch() is deprecated, in tests use monkeypatch funcarg.", - stacklevel=2, - ) - nref = (namespace, name) - orig = getattr(namespace, name) - patched.setdefault(nref, []).append(orig) - setattr(namespace, name, value) - return orig - -def revert(namespace, name): - """ (deprecated) revert to the orginal value the last patch modified. - Raise ValueError if no such original value exists. - """ - py.log._apiwarn("1.1", - "py.magic.revert() is deprecated, in tests use monkeypatch funcarg.", - stacklevel=2, - ) - nref = (namespace, name) - if nref not in patched or not patched[nref]: - raise ValueError("No original value stored for %s.%s" % nref) - current = getattr(namespace, name) - orig = patched[nref].pop() - setattr(namespace, name, orig) - return current - --- a/_py/code/__init__.py +++ /dev/null @@ -1,1 +0,0 @@ -""" python inspection/code generation API """ --- a/_py/cmdline/pyconvert_unittest.py +++ /dev/null @@ -1,249 +0,0 @@ -import re -import sys -import parser - -d={} -# d is the dictionary of unittest changes, keyed to the old name -# used by unittest. -# d[old][0] is the new replacement function. -# d[old][1] is the operator you will substitute, or '' if there is none. -# d[old][2] is the possible number of arguments to the unittest -# function. 
- -# Old Unittest Name new name operator # of args -d['assertRaises'] = ('raises', '', ['Any']) -d['fail'] = ('raise AssertionError', '', [0,1]) -d['assert_'] = ('assert', '', [1,2]) -d['failIf'] = ('assert not', '', [1,2]) -d['assertEqual'] = ('assert', ' ==', [2,3]) -d['failIfEqual'] = ('assert not', ' ==', [2,3]) -d['assertIn'] = ('assert', ' in', [2,3]) -d['assertNotIn'] = ('assert', ' not in', [2,3]) -d['assertNotEqual'] = ('assert', ' !=', [2,3]) -d['failUnlessEqual'] = ('assert', ' ==', [2,3]) -d['assertAlmostEqual'] = ('assert round', ' ==', [2,3,4]) -d['failIfAlmostEqual'] = ('assert not round', ' ==', [2,3,4]) -d['assertNotAlmostEqual'] = ('assert round', ' !=', [2,3,4]) -d['failUnlessAlmostEquals'] = ('assert round', ' ==', [2,3,4]) - -# the list of synonyms -d['failUnlessRaises'] = d['assertRaises'] -d['failUnless'] = d['assert_'] -d['assertEquals'] = d['assertEqual'] -d['assertNotEquals'] = d['assertNotEqual'] -d['assertAlmostEquals'] = d['assertAlmostEqual'] -d['assertNotAlmostEquals'] = d['assertNotAlmostEqual'] - -# set up the regular expressions we will need -leading_spaces = re.compile(r'^(\s*)') # this never fails - -pat = '' -for k in d.keys(): # this complicated pattern to match all unittests - pat += '|' + r'^(\s*)' + 'self.' + k + r'\(' # \tself.whatever( - -old_names = re.compile(pat[1:]) -linesep='\n' # nobody will really try to convert files not read - # in text mode, will they? - - -def blocksplitter(fp): - '''split a file into blocks that are headed by functions to rename''' - - blocklist = [] - blockstring = '' - - for line in fp: - interesting = old_names.match(line) - if interesting : - if blockstring: - blocklist.append(blockstring) - blockstring = line # reset the block - else: - blockstring += line - - blocklist.append(blockstring) - return blocklist - -def rewrite_utest(block): - '''rewrite every block to use the new utest functions''' - - '''returns the rewritten unittest, unless it ran into problems, - in which case it just returns the block unchanged. - ''' - utest = old_names.match(block) - - if not utest: - return block - - old = utest.group(0).lstrip()[5:-1] # the name we want to replace - new = d[old][0] # the name of the replacement function - op = d[old][1] # the operator you will use , or '' if there is none. - possible_args = d[old][2] # a list of the number of arguments the - # unittest function could possibly take. - - if possible_args == ['Any']: # just rename assertRaises & friends - return re.sub('self.'+old, new, block) - - message_pos = possible_args[-1] - # the remaining unittests can have an optional message to print - # when they fail. It is always the last argument to the function. - - try: - indent, argl, trailer = decompose_unittest(old, block) - - except SyntaxError: # but we couldn't parse it! 
- return block - - argnum = len(argl) - if argnum not in possible_args: - # sanity check - this one isn't real either - return block - - elif argnum == message_pos: - message = argl[-1] - argl = argl[:-1] - else: - message = None - - if argnum is 0 or (argnum is 1 and argnum is message_pos): #unittest fail() - string = '' - if message: - message = ' ' + message - - elif message_pos is 4: # assertAlmostEqual & friends - try: - pos = argl[2].lstrip() - except IndexError: - pos = '7' # default if none is specified - string = '(%s -%s, %s)%s 0' % (argl[0], argl[1], pos, op ) - - else: # assert_, assertEquals and all the rest - string = ' ' + op.join(argl) - - if message: - string = string + ',' + message - - return indent + new + string + trailer - -def decompose_unittest(old, block): - '''decompose the block into its component parts''' - - ''' returns indent, arglist, trailer - indent -- the indentation - arglist -- the arguments to the unittest function - trailer -- any extra junk after the closing paren, such as #commment - ''' - - indent = re.match(r'(\s*)', block).group() - pat = re.search('self.' + old + r'\(', block) - - args, trailer = get_expr(block[pat.end():], ')') - arglist = break_args(args, []) - - if arglist == ['']: # there weren't any - return indent, [], trailer - - for i in range(len(arglist)): - try: - parser.expr(arglist[i].lstrip('\t ')) - except SyntaxError: - if i == 0: - arglist[i] = '(' + arglist[i] + ')' - else: - arglist[i] = ' (' + arglist[i] + ')' - - return indent, arglist, trailer - -def break_args(args, arglist): - '''recursively break a string into a list of arguments''' - try: - first, rest = get_expr(args, ',') - if not rest: - return arglist + [first] - else: - return [first] + break_args(rest, arglist) - except SyntaxError: - return arglist + [args] - -def get_expr(s, char): - '''split a string into an expression, and the rest of the string''' - - pos=[] - for i in range(len(s)): - if s[i] == char: - pos.append(i) - if pos == []: - raise SyntaxError # we didn't find the expected char. Ick. - - for p in pos: - # make the python parser do the hard work of deciding which comma - # splits the string into two expressions - try: - parser.expr('(' + s[:p] + ')') - return s[:p], s[p+1:] - except SyntaxError: # It's not an expression yet - pass - raise SyntaxError # We never found anything that worked. - - -def main(): - import sys - import py - - usage = "usage: %prog [-s [filename ...] | [-i | -c filename ...]]" - optparser = py.std.optparse.OptionParser(usage) - - def select_output (option, opt, value, optparser, **kw): - if hasattr(optparser, 'output'): - optparser.error( - 'Cannot combine -s -i and -c options. Use one only.') - else: - optparser.output = kw['output'] - - optparser.add_option("-s", "--stdout", action="callback", - callback=select_output, - callback_kwargs={'output':'stdout'}, - help="send your output to stdout") - - optparser.add_option("-i", "--inplace", action="callback", - callback=select_output, - callback_kwargs={'output':'inplace'}, - help="overwrite files in place") - - optparser.add_option("-c", "--copy", action="callback", - callback=select_output, - callback_kwargs={'output':'copy'}, - help="copy files ... 
fn.py --> fn_cp.py") - - options, args = optparser.parse_args() - - output = getattr(optparser, 'output', 'stdout') - - if output in ['inplace', 'copy'] and not args: - optparser.error( - '-i and -c option require at least one filename') - - if not args: - s = '' - for block in blocksplitter(sys.stdin): - s += rewrite_utest(block) - sys.stdout.write(s) - - else: - for infilename in args: # no error checking to see if we can open, etc. - infile = file(infilename) - s = '' - for block in blocksplitter(infile): - s += rewrite_utest(block) - if output == 'inplace': - outfile = file(infilename, 'w+') - elif output == 'copy': # yes, just go clobber any existing .cp - outfile = file (infilename[:-3]+ '_cp.py', 'w+') - else: - outfile = sys.stdout - - outfile.write(s) - - -if __name__ == '__main__': - main() --- a/_py/code/_assertionnew.py +++ /dev/null @@ -1,337 +0,0 @@ -""" -Like _assertion.py but using builtin AST. It should replace _assertion.py -eventually. -""" - -import sys -import ast - -import py -from _py.code.assertion import _format_explanation, BuiltinAssertionError - - -if sys.platform.startswith("java") and sys.version_info < (2, 5, 2): - # See http://bugs.jython.org/issue1497 - _exprs = ("BoolOp", "BinOp", "UnaryOp", "Lambda", "IfExp", "Dict", - "ListComp", "GeneratorExp", "Yield", "Compare", "Call", - "Repr", "Num", "Str", "Attribute", "Subscript", "Name", - "List", "Tuple") - _stmts = ("FunctionDef", "ClassDef", "Return", "Delete", "Assign", - "AugAssign", "Print", "For", "While", "If", "With", "Raise", - "TryExcept", "TryFinally", "Assert", "Import", "ImportFrom", - "Exec", "Global", "Expr", "Pass", "Break", "Continue") - _expr_nodes = set(getattr(ast, name) for name in _exprs) - _stmt_nodes = set(getattr(ast, name) for name in _stmts) - def _is_ast_expr(node): - return node.__class__ in _expr_nodes - def _is_ast_stmt(node): - return node.__class__ in _stmt_nodes -else: - def _is_ast_expr(node): - return isinstance(node, ast.expr) - def _is_ast_stmt(node): - return isinstance(node, ast.stmt) - - -class Failure(Exception): - """Error found while interpreting AST.""" - - def __init__(self, explanation=""): - self.cause = sys.exc_info() - self.explanation = explanation - - -def interpret(source, frame, should_fail=False): - mod = ast.parse(source) - visitor = DebugInterpreter(frame) - try: - visitor.visit(mod) - except Failure: - failure = sys.exc_info()[1] - return getfailure(failure) - if should_fail: - return ("(assertion failed, but when it was re-run for " - "printing intermediate values, it did not fail. 
Suggestions: " - "compute assert expression before the assert or use --nomagic)") - -def run(offending_line, frame=None): - if frame is None: - frame = py.code.Frame(sys._getframe(1)) - return interpret(offending_line, frame) - -def getfailure(failure): - explanation = _format_explanation(failure.explanation) - value = failure.cause[1] - if str(value): - lines = explanation.splitlines() - if not lines: - lines.append("") - lines[0] += " << %s" % (value,) - explanation = "\n".join(lines) - text = "%s: %s" % (failure.cause[0].__name__, explanation) - if text.startswith("AssertionError: assert "): - text = text[16:] - return text - - -operator_map = { - ast.BitOr : "|", - ast.BitXor : "^", - ast.BitAnd : "&", - ast.LShift : "<<", - ast.RShift : ">>", - ast.Add : "+", - ast.Sub : "-", - ast.Mult : "*", - ast.Div : "/", - ast.FloorDiv : "//", - ast.Mod : "%", - ast.Eq : "==", - ast.NotEq : "!=", - ast.Lt : "<", - ast.LtE : "<=", - ast.Gt : ">", - ast.GtE : ">=", - ast.Is : "is", - ast.IsNot : "is not", - ast.In : "in", - ast.NotIn : "not in" -} - -unary_map = { - ast.Not : "not %s", - ast.Invert : "~%s", - ast.USub : "-%s", - ast.UAdd : "+%s" -} - - -class DebugInterpreter(ast.NodeVisitor): - """Interpret AST nodes to gleam useful debugging information.""" - - def __init__(self, frame): - self.frame = frame - - def generic_visit(self, node): - # Fallback when we don't have a special implementation. - if _is_ast_expr(node): - mod = ast.Expression(node) - co = self._compile(mod) - try: - result = self.frame.eval(co) - except Exception: - raise Failure() - explanation = self.frame.repr(result) - return explanation, result - elif _is_ast_stmt(node): - mod = ast.Module([node]) - co = self._compile(mod, "exec") - try: - self.frame.exec_(co) - except Exception: - raise Failure() - return None, None - else: - raise AssertionError("can't handle %s" %(node,)) - - def _compile(self, source, mode="eval"): - return compile(source, "", mode) - - def visit_Expr(self, expr): - return self.visit(expr.value) - - def visit_Module(self, mod): - for stmt in mod.body: - self.visit(stmt) - - def visit_Name(self, name): - explanation, result = self.generic_visit(name) - # See if the name is local. 
- source = "%r in locals() is not globals()" % (name.id,) - co = self._compile(source) - try: - local = self.frame.eval(co) - except Exception: - # have to assume it isn't - local = False - if not local: - return name.id, result - return explanation, result - - def visit_Compare(self, comp): - left = comp.left - left_explanation, left_result = self.visit(left) - got_result = False - for op, next_op in zip(comp.ops, comp.comparators): - if got_result and not result: - break - next_explanation, next_result = self.visit(next_op) - op_symbol = operator_map[op.__class__] - explanation = "%s %s %s" % (left_explanation, op_symbol, - next_explanation) - source = "__exprinfo_left %s __exprinfo_right" % (op_symbol,) - co = self._compile(source) - try: - result = self.frame.eval(co, __exprinfo_left=left_result, - __exprinfo_right=next_result) - except Exception: - raise Failure(explanation) - else: - got_result = True - left_explanation, left_result = next_explanation, next_result - return explanation, result - - def visit_BoolOp(self, boolop): - is_or = isinstance(boolop.op, ast.Or) - explanations = [] - for operand in boolop.values: - explanation, result = self.visit(operand) - explanations.append(explanation) - if result == is_or: - break - name = is_or and " or " or " and " - explanation = "(" + name.join(explanations) + ")" - return explanation, result - - def visit_UnaryOp(self, unary): - pattern = unary_map[unary.op.__class__] - operand_explanation, operand_result = self.visit(unary.operand) - explanation = pattern % (operand_explanation,) - co = self._compile(pattern % ("__exprinfo_expr",)) - try: - result = self.frame.eval(co, __exprinfo_expr=operand_result) - except Exception: - raise Failure(explanation) - return explanation, result - - def visit_BinOp(self, binop): - left_explanation, left_result = self.visit(binop.left) - right_explanation, right_result = self.visit(binop.right) - symbol = operator_map[binop.op.__class__] - explanation = "(%s %s %s)" % (left_explanation, symbol, - right_explanation) - source = "__exprinfo_left %s __exprinfo_right" % (symbol,) - co = self._compile(source) - try: - result = self.frame.eval(co, __exprinfo_left=left_result, - __exprinfo_right=right_result) - except Exception: - raise Failure(explanation) - return explanation, result - - def visit_Call(self, call): - func_explanation, func = self.visit(call.func) - arg_explanations = [] - ns = {"__exprinfo_func" : func} - arguments = [] - for arg in call.args: - arg_explanation, arg_result = self.visit(arg) - arg_name = "__exprinfo_%s" % (len(ns),) - ns[arg_name] = arg_result - arguments.append(arg_name) - arg_explanations.append(arg_explanation) - for keyword in call.keywords: - arg_explanation, arg_result = self.visit(keyword.value) - arg_name = "__exprinfo_%s" % (len(ns),) - ns[arg_name] = arg_result - keyword_source = "%s=%%s" % (keyword.id) - arguments.append(keyword_source % (arg_name,)) - arg_explanations.append(keyword_source % (arg_explanation,)) - if call.starargs: - arg_explanation, arg_result = self.visit(call.starargs) - arg_name = "__exprinfo_star" - ns[arg_name] = arg_result - arguments.append("*%s" % (arg_name,)) - arg_explanations.append("*%s" % (arg_explanation,)) - if call.kwargs: - arg_explanation, arg_result = self.visit(call.kwargs) - arg_name = "__exprinfo_kwds" - ns[arg_name] = arg_result - arguments.append("**%s" % (arg_name,)) - arg_explanations.append("**%s" % (arg_explanation,)) - args_explained = ", ".join(arg_explanations) - explanation = "%s(%s)" % (func_explanation, 
args_explained) - args = ", ".join(arguments) - source = "__exprinfo_func(%s)" % (args,) - co = self._compile(source) - try: - result = self.frame.eval(co, **ns) - except Exception: - raise Failure(explanation) - # Only show result explanation if it's not a builtin call or returns a - # bool. - if not isinstance(call.func, ast.Name) or \ - not self._is_builtin_name(call.func): - source = "isinstance(__exprinfo_value, bool)" - co = self._compile(source) - try: - is_bool = self.frame.eval(co, __exprinfo_value=result) - except Exception: - is_bool = False - if not is_bool: - pattern = "%s\n{%s = %s\n}" - rep = self.frame.repr(result) - explanation = pattern % (rep, rep, explanation) - return explanation, result - - def _is_builtin_name(self, name): - pattern = "%r not in globals() and %r not in locals()" - source = pattern % (name.id, name.id) - co = self._compile(source) - try: - return self.frame.eval(co) - except Exception: - return False - - def visit_Attribute(self, attr): - if not isinstance(attr.ctx, ast.Load): - return self.generic_visit(attr) - source_explanation, source_result = self.visit(attr.value) - explanation = "%s.%s" % (source_explanation, attr.attr) - source = "__exprinfo_expr.%s" % (attr.attr,) - co = self._compile(source) - try: - result = self.frame.eval(co, __exprinfo_expr=source_result) - except Exception: - raise Failure(explanation) - # Check if the attr is from an instance. - source = "%r in getattr(__exprinfo_expr, '__dict__', {})" - source = source % (attr.attr,) - co = self._compile(source) - try: - from_instance = self.frame.eval(co, __exprinfo_expr=source_result) - except Exception: - from_instance = True - if from_instance: - rep = self.frame.repr(result) - pattern = "%s\n{%s = %s\n}" - explanation = pattern % (rep, rep, explanation) - return explanation, result - - def visit_Assert(self, assrt): - test_explanation, test_result = self.visit(assrt.test) - if test_explanation.startswith("False\n{False =") and \ - test_explanation.endswith("\n"): - test_explanation = test_explanation[15:-2] - explanation = "assert %s" % (test_explanation,) - if not test_result: - try: - raise BuiltinAssertionError - except Exception: - raise Failure(explanation) - return explanation, test_result - - def visit_Assign(self, assign): - value_explanation, value_result = self.visit(assign.value) - explanation = "... = %s" % (value_explanation,) - name = ast.Name("__exprinfo_expr", ast.Load(), assign.value.lineno, - assign.value.col_offset) - new_assign = ast.Assign(assign.targets, name, assign.lineno, - assign.col_offset) - mod = ast.Module([new_assign]) - co = self._compile(mod, "exec") - try: - self.frame.exec_(co, __exprinfo_expr=value_result) - except Exception: - raise Failure(explanation) - return explanation, value_result --- a/_py/compat/dep_textwrap.py +++ /dev/null @@ -1,4 +0,0 @@ -import py - -py.log._apiwarn("1.1", "py.compat.textwrap deprecated, use standard library version.", stacklevel="initpkg") -textwrap = py.std.textwrap --- a/_py/log/log.py +++ /dev/null @@ -1,184 +0,0 @@ -""" -basic logging functionality based on a producer/consumer scheme. - -XXX implement this API: (maybe put it into slogger.py?) 
- - log = Logger( - info=py.log.STDOUT, - debug=py.log.STDOUT, - command=None) - log.info("hello", "world") - log.command("hello", "world") - - log = Logger(info=Logger(something=...), - debug=py.log.STDOUT, - command=None) -""" -import py, sys - -class Message(object): - def __init__(self, keywords, args): - self.keywords = keywords - self.args = args - - def content(self): - return " ".join(map(str, self.args)) - - def prefix(self): - return "[%s] " % (":".join(self.keywords)) - - def __str__(self): - return self.prefix() + self.content() - - -class Producer(object): - """ (deprecated) Log producer API which sends messages to be logged - to a 'consumer' object, which then prints them to stdout, - stderr, files, etc. Used extensively by PyPy-1.1. - """ - - Message = Message # to allow later customization - keywords2consumer = {} - - def __init__(self, keywords, keywordmapper=None, **kw): - if hasattr(keywords, 'split'): - keywords = tuple(keywords.split()) - self._keywords = keywords - if keywordmapper is None: - keywordmapper = default_keywordmapper - self._keywordmapper = keywordmapper - - def __repr__(self): - return "" % ":".join(self._keywords) - - def __getattr__(self, name): - if '_' in name: - raise AttributeError(name) - producer = self.__class__(self._keywords + (name,)) - setattr(self, name, producer) - return producer - - def __call__(self, *args): - """ write a message to the appropriate consumer(s) """ - func = self._keywordmapper.getconsumer(self._keywords) - if func is not None: - func(self.Message(self._keywords, args)) - -class KeywordMapper: - def __init__(self): - self.keywords2consumer = {} - - def getstate(self): - return self.keywords2consumer.copy() - def setstate(self, state): - self.keywords2consumer.clear() - self.keywords2consumer.update(state) - - def getconsumer(self, keywords): - """ return a consumer matching the given keywords. - - tries to find the most suitable consumer by walking, starting from - the back, the list of keywords, the first consumer matching a - keyword is returned (falling back to py.log.default) - """ - for i in range(len(keywords), 0, -1): - try: - return self.keywords2consumer[keywords[:i]] - except KeyError: - continue - return self.keywords2consumer.get('default', default_consumer) - - def setconsumer(self, keywords, consumer): - """ set a consumer for a set of keywords. 
""" - # normalize to tuples - if isinstance(keywords, str): - keywords = tuple(filter(None, keywords.split())) - elif hasattr(keywords, '_keywords'): - keywords = keywords._keywords - elif not isinstance(keywords, tuple): - raise TypeError("key %r is not a string or tuple" % (keywords,)) - if consumer is not None and not py.builtin.callable(consumer): - if not hasattr(consumer, 'write'): - raise TypeError( - "%r should be None, callable or file-like" % (consumer,)) - consumer = File(consumer) - self.keywords2consumer[keywords] = consumer - -def default_consumer(msg): - """ the default consumer, prints the message to stdout (using 'print') """ - sys.stderr.write(str(msg)+"\n") - -default_keywordmapper = KeywordMapper() - -def setconsumer(keywords, consumer): - default_keywordmapper.setconsumer(keywords, consumer) - -def setstate(state): - default_keywordmapper.setstate(state) -def getstate(): - return default_keywordmapper.getstate() - -# -# Consumers -# - -class File(object): - """ log consumer wrapping a file(-like) object """ - def __init__(self, f): - assert hasattr(f, 'write') - #assert isinstance(f, file) or not hasattr(f, 'open') - self._file = f - - def __call__(self, msg): - """ write a message to the log """ - self._file.write(str(msg) + "\n") - -class Path(object): - """ log consumer that opens and writes to a Path """ - def __init__(self, filename, append=False, - delayed_create=False, buffering=False): - self._append = append - self._filename = str(filename) - self._buffering = buffering - if not delayed_create: - self._openfile() - - def _openfile(self): - mode = self._append and 'a' or 'w' - f = open(self._filename, mode) - self._file = f - - def __call__(self, msg): - """ write a message to the log """ - if not hasattr(self, "_file"): - self._openfile() - self._file.write(str(msg) + "\n") - if not self._buffering: - self._file.flush() - -def STDOUT(msg): - """ consumer that writes to sys.stdout """ - sys.stdout.write(str(msg)+"\n") - -def STDERR(msg): - """ consumer that writes to sys.stderr """ - sys.stderr.write(str(msg)+"\n") - -class Syslog: - """ consumer that writes to the syslog daemon """ - - def __init__(self, priority = None): - if priority is None: - priority = self.LOG_INFO - self.priority = priority - - def __call__(self, msg): - """ write a message to the log """ - py.std.syslog.syslog(self.priority, str(msg)) - -for _prio in "EMERG ALERT CRIT ERR WARNING NOTICE INFO DEBUG".split(): - _prio = "LOG_" + _prio - try: - setattr(Syslog, _prio, getattr(py.std.syslog, _prio)) - except AttributeError: - pass --- a/_py/process/killproc.py +++ /dev/null @@ -1,23 +0,0 @@ -import py -import os, sys - -if sys.platform == "win32": - try: - import ctypes - except ImportError: - def dokill(pid): - py.process.cmdexec("taskkill /F /PID %d" %(pid,)) - else: - def dokill(pid): - PROCESS_TERMINATE = 1 - handle = ctypes.windll.kernel32.OpenProcess( - PROCESS_TERMINATE, False, pid) - ctypes.windll.kernel32.TerminateProcess(handle, -1) - ctypes.windll.kernel32.CloseHandle(handle) -else: - def dokill(pid): - os.kill(pid, 15) - -def kill(pid): - """ kill process by id. """ - dokill(pid) --- a/_py/process/cmdexec.py +++ /dev/null @@ -1,44 +0,0 @@ -""" - -""" - -import os, sys -import subprocess -import py -from subprocess import Popen, PIPE - -def cmdexec(cmd): - """ return output of executing 'cmd' in a separate process. - - raise cmdexec.ExecutionFailed exeception if the command failed. - the exception will provide an 'err' attribute containing - the error-output from the command. 
- """ - process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) - out, err = process.communicate() - out = py.builtin._totext(out, sys.getdefaultencoding()) - err = py.builtin._totext(err, sys.getdefaultencoding()) - status = process.poll() - if status: - raise ExecutionFailed(status, status, cmd, out, err) - return out - -class ExecutionFailed(py.error.Error): - def __init__(self, status, systemstatus, cmd, out, err): - Exception.__init__(self) - self.status = status - self.systemstatus = systemstatus - self.cmd = cmd - self.err = err - self.out = out - - def __str__(self): - return "ExecutionFailed: %d %s\n%s" %(self.status, self.cmd, self.err) - -# export the exception under the name 'py.process.cmdexec.Error' -cmdexec.Error = ExecutionFailed -try: - ExecutionFailed.__module__ = 'py.process.cmdexec' - ExecutionFailed.__name__ = 'Error' -except (AttributeError, TypeError): - pass --- a/_py/path/cacheutil.py +++ /dev/null @@ -1,111 +0,0 @@ -""" -This module contains multithread-safe cache implementations. - -All Caches have - - getorbuild(key, builder) - delentry(key) - -methods and allow configuration when instantiating the cache class. -""" -from time import time as gettime - -class BasicCache(object): - def __init__(self, maxentries=128): - self.maxentries = maxentries - self.prunenum = int(maxentries - maxentries/8) - self._dict = {} - - def _getentry(self, key): - return self._dict[key] - - def _putentry(self, key, entry): - self._prunelowestweight() - self._dict[key] = entry - - def delentry(self, key, raising=False): - try: - del self._dict[key] - except KeyError: - if raising: - raise - - def getorbuild(self, key, builder): - try: - entry = self._getentry(key) - except KeyError: - entry = self._build(key, builder) - self._putentry(key, entry) - return entry.value - - def _prunelowestweight(self): - """ prune out entries with lowest weight. """ - numentries = len(self._dict) - if numentries >= self.maxentries: - # evict according to entry's weight - items = [(entry.weight, key) - for key, entry in self._dict.items()] - items.sort() - index = numentries - self.prunenum - if index > 0: - for weight, key in items[:index]: - # in MT situations the element might be gone - self.delentry(key, raising=False) - -class BuildcostAccessCache(BasicCache): - """ A BuildTime/Access-counting cache implementation. - the weight of a value is computed as the product of - - num-accesses-of-a-value * time-to-build-the-value - - The values with the least such weights are evicted - if the cache maxentries threshold is superceded. - For implementation flexibility more than one object - might be evicted at a time. - """ - # time function to use for measuring build-times - - def _build(self, key, builder): - start = gettime() - val = builder() - end = gettime() - return WeightedCountingEntry(val, end-start) - - -class WeightedCountingEntry(object): - def __init__(self, value, oneweight): - self._value = value - self.weight = self._oneweight = oneweight - - def value(self): - self.weight += self._oneweight - return self._value - value = property(value) - -class AgingCache(BasicCache): - """ This cache prunes out cache entries that are too old. 
- """ - def __init__(self, maxentries=128, maxseconds=10.0): - super(AgingCache, self).__init__(maxentries) - self.maxseconds = maxseconds - - def _getentry(self, key): - entry = self._dict[key] - if entry.isexpired(): - self.delentry(key) - raise KeyError(key) - return entry - - def _build(self, key, builder): - val = builder() - entry = AgingEntry(val, gettime() + self.maxseconds) - return entry - -class AgingEntry(object): - def __init__(self, value, expirationtime): - self.value = value - self.weight = expirationtime - - def isexpired(self): - t = gettime() - return t >= self.weight --- a/_py/code/code.py +++ /dev/null @@ -1,767 +0,0 @@ -import py -import sys - -builtin_repr = repr - -repr = py.builtin._tryimport('repr', 'reprlib') - -class Code(object): - """ wrapper around Python code objects """ - def __init__(self, rawcode): - rawcode = py.code.getrawcode(rawcode) - self.raw = rawcode - try: - self.filename = rawcode.co_filename - self.firstlineno = rawcode.co_firstlineno - 1 - self.name = rawcode.co_name - except AttributeError: - raise TypeError("not a code object: %r" %(rawcode,)) - - def __eq__(self, other): - return self.raw == other.raw - - def __ne__(self, other): - return not self == other - - def new(self, rec=False, **kwargs): - """ return new code object with modified attributes. - if rec-cursive is true then dive into code - objects contained in co_consts. - """ - if sys.platform.startswith("java"): - # XXX jython does not support the below co_filename hack - return self.raw - names = [x for x in dir(self.raw) if x[:3] == 'co_'] - for name in kwargs: - if name not in names: - raise TypeError("unknown code attribute: %r" %(name, )) - if rec and hasattr(self.raw, 'co_consts'): # jython - newconstlist = [] - co = self.raw - cotype = type(co) - for c in co.co_consts: - if isinstance(c, cotype): - c = self.__class__(c).new(rec=True, **kwargs) - newconstlist.append(c) - return self.new(rec=False, co_consts=tuple(newconstlist), **kwargs) - for name in names: - if name not in kwargs: - kwargs[name] = getattr(self.raw, name) - arglist = [ - kwargs['co_argcount'], - kwargs['co_nlocals'], - kwargs.get('co_stacksize', 0), # jython - kwargs.get('co_flags', 0), # jython - kwargs.get('co_code', ''), # jython - kwargs.get('co_consts', ()), # jython - kwargs.get('co_names', []), # - kwargs['co_varnames'], - kwargs['co_filename'], - kwargs['co_name'], - kwargs['co_firstlineno'], - kwargs.get('co_lnotab', ''), #jython - kwargs.get('co_freevars', None), #jython - kwargs.get('co_cellvars', None), # jython - ] - if sys.version_info >= (3,0): - arglist.insert(1, kwargs['co_kwonlyargcount']) - return self.raw.__class__(*arglist) - else: - return py.std.new.code(*arglist) - - def path(self): - """ return a py.path.local object pointing to the source code """ - fn = self.raw.co_filename - try: - return fn.__path__ - except AttributeError: - p = py.path.local(self.raw.co_filename) - if not p.check(file=1): - # XXX maybe try harder like the weird logic - # in the standard lib [linecache.updatecache] does? 
- p = self.raw.co_filename - return p - - path = property(path, None, None, "path of this code object") - - def fullsource(self): - """ return a py.code.Source object for the full source file of the code - """ - from _py.code import source - full, _ = source.findsource(self.raw) - return full - fullsource = property(fullsource, None, None, - "full source containing this code object") - - def source(self): - """ return a py.code.Source object for the code object's source only - """ - # return source only for that part of code - return py.code.Source(self.raw) - - def getargs(self): - """ return a tuple with the argument names for the code object - """ - # handfull shortcut for getting args - raw = self.raw - return raw.co_varnames[:raw.co_argcount] - -class Frame(object): - """Wrapper around a Python frame holding f_locals and f_globals - in which expressions can be evaluated.""" - - def __init__(self, frame): - self.code = py.code.Code(frame.f_code) - self.lineno = frame.f_lineno - 1 - self.f_globals = frame.f_globals - self.f_locals = frame.f_locals - self.raw = frame - - def statement(self): - if self.code.fullsource is None: - return py.code.Source("") - return self.code.fullsource.getstatement(self.lineno) - statement = property(statement, None, None, - "statement this frame is at") - - def eval(self, code, **vars): - """ evaluate 'code' in the frame - - 'vars' are optional additional local variables - - returns the result of the evaluation - """ - f_locals = self.f_locals.copy() - f_locals.update(vars) - return eval(code, self.f_globals, f_locals) - - def exec_(self, code, **vars): - """ exec 'code' in the frame - - 'vars' are optiona; additional local variables - """ - f_locals = self.f_locals.copy() - f_locals.update(vars) - py.builtin.exec_(code, self.f_globals, f_locals ) - - def repr(self, object): - """ return a 'safe' (non-recursive, one-line) string repr for 'object' - """ - return safe_repr(object) - - def is_true(self, object): - return object - - def getargs(self): - """ return a list of tuples (name, value) for all arguments - """ - retval = [] - for arg in self.code.getargs(): - try: - retval.append((arg, self.f_locals[arg])) - except KeyError: - pass # this can occur when using Psyco - return retval - -class TracebackEntry(object): - """ a single entry in a traceback """ - - exprinfo = None - - def __init__(self, rawentry): - self._rawentry = rawentry - self.frame = py.code.Frame(rawentry.tb_frame) - # Ugh. 2.4 and 2.5 differs here when encountering - # multi-line statements. 
Not sure about the solution, but - # should be portable - self.lineno = rawentry.tb_lineno - 1 - self.relline = self.lineno - self.frame.code.firstlineno - - def __repr__(self): - return "" %(self.frame.code.path, self.lineno+1) - - def statement(self): - """ return a py.code.Source object for the current statement """ - source = self.frame.code.fullsource - return source.getstatement(self.lineno) - statement = property(statement, None, None, - "statement of this traceback entry.") - - def path(self): - return self.frame.code.path - path = property(path, None, None, "path to the full source code") - - def getlocals(self): - return self.frame.f_locals - locals = property(getlocals, None, None, "locals of underlaying frame") - - def reinterpret(self): - """Reinterpret the failing statement and returns a detailed information - about what operations are performed.""" - if self.exprinfo is None: - from _py.code import assertion - source = str(self.statement).strip() - x = assertion.interpret(source, self.frame, should_fail=True) - if not isinstance(x, str): - raise TypeError("interpret returned non-string %r" % (x,)) - self.exprinfo = x - return self.exprinfo - - def getfirstlinesource(self): - return self.frame.code.firstlineno - - def getsource(self): - """ return failing source code. """ - source = self.frame.code.fullsource - if source is None: - return None - start = self.getfirstlinesource() - end = self.lineno - try: - _, end = source.getstatementrange(end) - except IndexError: - end = self.lineno + 1 - # heuristic to stop displaying source on e.g. - # if something: # assume this causes a NameError - # # _this_ lines and the one - # below we don't want from entry.getsource() - for i in range(self.lineno, end): - if source[i].rstrip().endswith(':'): - end = i + 1 - break - return source[start:end] - source = property(getsource) - - def ishidden(self): - """ return True if the current frame has a var __tracebackhide__ - resolving to True - - mostly for internal use - """ - try: - return self.frame.eval("__tracebackhide__") - except (SystemExit, KeyboardInterrupt): - raise - except: - return False - - def __str__(self): - try: - fn = str(self.path) - except py.error.Error: - fn = '???' - name = self.frame.code.name - try: - line = str(self.statement).lstrip() - except KeyboardInterrupt: - raise - except: - line = "???" - return " File %r:%d in %s\n %s\n" %(fn, self.lineno+1, name, line) - - def name(self): - return self.frame.code.raw.co_name - name = property(name, None, None, "co_name of underlaying code") - -class Traceback(list): - """ Traceback objects encapsulate and offer higher level - access to Traceback entries. - """ - Entry = TracebackEntry - def __init__(self, tb): - """ initialize from given python traceback object. """ - if hasattr(tb, 'tb_next'): - def f(cur): - while cur is not None: - yield self.Entry(cur) - cur = cur.tb_next - list.__init__(self, f(tb)) - else: - list.__init__(self, tb) - - def cut(self, path=None, lineno=None, firstlineno=None, excludepath=None): - """ return a Traceback instance wrapping part of this Traceback - - by provding any combination of path, lineno and firstlineno, the - first frame to start the to-be-returned traceback is determined - - this allows cutting the first part of a Traceback instance e.g. 
- for formatting reasons (removing some uninteresting bits that deal - with handling of the exception/traceback) - """ - for x in self: - code = x.frame.code - codepath = code.path - if ((path is None or codepath == path) and - (excludepath is None or (hasattr(codepath, 'relto') and - not codepath.relto(excludepath))) and - (lineno is None or x.lineno == lineno) and - (firstlineno is None or x.frame.code.firstlineno == firstlineno)): - return Traceback(x._rawentry) - return self - - def __getitem__(self, key): - val = super(Traceback, self).__getitem__(key) - if isinstance(key, type(slice(0))): - val = self.__class__(val) - return val - - def filter(self, fn=lambda x: not x.ishidden()): - """ return a Traceback instance with certain items removed - - fn is a function that gets a single argument, a TracebackItem - instance, and should return True when the item should be added - to the Traceback, False when not - - by default this removes all the TracebackItems which are hidden - (see ishidden() above) - """ - return Traceback(filter(fn, self)) - - def getcrashentry(self): - """ return last non-hidden traceback entry that lead - to the exception of a traceback. - """ - tb = self.filter() - if not tb: - tb = self - return tb[-1] - - def recursionindex(self): - """ return the index of the frame/TracebackItem where recursion - originates if appropriate, None if no recursion occurred - """ - cache = {} - for i, entry in enumerate(self): - key = entry.frame.code.path, entry.lineno - #print "checking for recursion at", key - l = cache.setdefault(key, []) - if l: - f = entry.frame - loc = f.f_locals - for otherloc in l: - if f.is_true(f.eval(co_equal, - __recursioncache_locals_1=loc, - __recursioncache_locals_2=otherloc)): - return i - l.append(entry.frame.f_locals) - return None - -co_equal = compile('__recursioncache_locals_1 == __recursioncache_locals_2', - '?', 'eval') - -class ExceptionInfo(object): - """ wraps sys.exc_info() objects and offers - help for navigating the traceback. - """ - _striptext = '' - def __init__(self, tup=None, exprinfo=None): - # NB. all attributes are private! Subclasses or other - # ExceptionInfo-like classes may have different attributes. 
- if tup is None: - tup = sys.exc_info() - if exprinfo is None and isinstance(tup[1], py.code._AssertionError): - exprinfo = getattr(tup[1], 'msg', None) - if exprinfo is None: - exprinfo = str(tup[1]) - if exprinfo and exprinfo.startswith('assert '): - self._striptext = 'AssertionError: ' - self._excinfo = tup - self.type, self.value, tb = self._excinfo - self.typename = self.type.__name__ - self.traceback = py.code.Traceback(tb) - - def __repr__(self): - return "" % (self.typename, len(self.traceback)) - - def exconly(self, tryshort=False): - """ return the exception as a string - - when 'tryshort' resolves to True, and the exception is a - py.code._AssertionError, only the actual exception part of - the exception representation is returned (so 'AssertionError: ' is - removed from the beginning) - """ - lines = py.std.traceback.format_exception_only(self.type, self.value) - text = ''.join(lines) - text = text.rstrip() - if tryshort: - if text.startswith(self._striptext): - text = text[len(self._striptext):] - return text - - def errisinstance(self, exc): - """ return True if the exception is an instance of exc """ - return isinstance(self.value, exc) - - def _getreprcrash(self): - exconly = self.exconly(tryshort=True) - entry = self.traceback.getcrashentry() - path, lineno = entry.path, entry.lineno - reprcrash = ReprFileLocation(path, lineno+1, exconly) - return reprcrash - - def getrepr(self, showlocals=False, style="long", - abspath=False, tbfilter=True, funcargs=False): - """ return str()able representation of this exception info. - showlocals: show locals per traceback entry - style: long|short|no traceback style - tbfilter: hide entries (where __tracebackhide__ is true) - """ - fmt = FormattedExcinfo(showlocals=showlocals, style=style, - abspath=abspath, tbfilter=tbfilter, funcargs=funcargs) - return fmt.repr_excinfo(self) - - def __str__(self): - entry = self.traceback[-1] - loc = ReprFileLocation(entry.path, entry.lineno + 1, self.exconly()) - return str(loc) - -class FormattedExcinfo(object): - """ presenting information about failing Functions and Generators. """ - # for traceback entries - flow_marker = ">" - fail_marker = "E" - - def __init__(self, showlocals=False, style="long", abspath=True, tbfilter=True, funcargs=False): - self.showlocals = showlocals - self.style = style - self.tbfilter = tbfilter - self.funcargs = funcargs - self.abspath = abspath - - def _getindent(self, source): - # figure out indent for given source - try: - s = str(source.getstatement(len(source)-1)) - except KeyboardInterrupt: - raise - except: - try: - s = str(source[-1]) - except KeyboardInterrupt: - raise - except: - return 0 - return 4 + (len(s) - len(s.lstrip())) - - def _getentrysource(self, entry): - source = entry.getsource() - if source is not None: - source = source.deindent() - return source - - def _saferepr(self, obj): - return safe_repr(obj) - - def repr_args(self, entry): - if self.funcargs: - args = [] - for argname, argvalue in entry.frame.getargs(): - args.append((argname, self._saferepr(argvalue))) - return ReprFuncArgs(args) - - def get_source(self, source, line_index=-1, excinfo=None): - """ return formatted and marked up source lines. 
""" - lines = [] - if source is None: - source = py.code.Source("???") - line_index = 0 - if line_index < 0: - line_index += len(source) - for i in range(len(source)): - if i == line_index: - prefix = self.flow_marker + " " - else: - prefix = " " - line = prefix + source[i] - lines.append(line) - if excinfo is not None: - indent = self._getindent(source) - lines.extend(self.get_exconly(excinfo, indent=indent, markall=True)) - return lines - - def get_exconly(self, excinfo, indent=4, markall=False): - lines = [] - indent = " " * indent - # get the real exception information out - exlines = excinfo.exconly(tryshort=True).split('\n') - failindent = self.fail_marker + indent[1:] - for line in exlines: - lines.append(failindent + line) - if not markall: - failindent = indent - return lines - - def repr_locals(self, locals): - if self.showlocals: - lines = [] - keys = list(locals) - keys.sort() - for name in keys: - value = locals[name] - if name == '__builtins__': - lines.append("__builtins__ = ") - else: - # This formatting could all be handled by the - # _repr() function, which is only repr.Repr in - # disguise, so is very configurable. - str_repr = self._saferepr(value) - #if len(str_repr) < 70 or not isinstance(value, - # (list, tuple, dict)): - lines.append("%-10s = %s" %(name, str_repr)) - #else: - # self._line("%-10s =\\" % (name,)) - # # XXX - # py.std.pprint.pprint(value, stream=self.excinfowriter) - return ReprLocals(lines) - - def repr_traceback_entry(self, entry, excinfo=None): - # excinfo is not None if this is the last tb entry - source = self._getentrysource(entry) - if source is None: - source = py.code.Source("???") - line_index = 0 - else: - line_index = entry.lineno - entry.getfirstlinesource() - - lines = [] - if self.style == "long": - reprargs = self.repr_args(entry) - lines.extend(self.get_source(source, line_index, excinfo)) - message = excinfo and excinfo.typename or "" - path = self._makepath(entry.path) - filelocrepr = ReprFileLocation(path, entry.lineno+1, message) - localsrepr = self.repr_locals(entry.locals) - return ReprEntry(lines, reprargs, localsrepr, filelocrepr) - else: - if self.style == "short": - line = source[line_index].lstrip() - lines.append(' File "%s", line %d, in %s' % ( - entry.path.basename, entry.lineno+1, entry.name)) - lines.append(" " + line) - if excinfo: - lines.extend(self.get_exconly(excinfo, indent=4)) - return ReprEntry(lines, None, None, None) - - def _makepath(self, path): - if not self.abspath: - np = py.path.local().bestrelpath(path) - if len(np) < len(str(path)): - path = np - return path - - def repr_traceback(self, excinfo): - traceback = excinfo.traceback - if self.tbfilter: - traceback = traceback.filter() - recursionindex = None - if excinfo.errisinstance(RuntimeError): - recursionindex = traceback.recursionindex() - last = traceback[-1] - entries = [] - extraline = None - for index, entry in enumerate(traceback): - einfo = (last == entry) and excinfo or None - reprentry = self.repr_traceback_entry(entry, einfo) - entries.append(reprentry) - if index == recursionindex: - extraline = "!!! 
Recursion detected (same locals & position)" - break - return ReprTraceback(entries, extraline, style=self.style) - - def repr_excinfo(self, excinfo): - reprtraceback = self.repr_traceback(excinfo) - reprcrash = excinfo._getreprcrash() - return ReprExceptionInfo(reprtraceback, reprcrash) - -class TerminalRepr: - def __str__(self): - tw = py.io.TerminalWriter(stringio=True) - self.toterminal(tw) - return tw.stringio.getvalue().strip() - - def __repr__(self): - return "<%s instance at %0x>" %(self.__class__, id(self)) - -class ReprExceptionInfo(TerminalRepr): - def __init__(self, reprtraceback, reprcrash): - self.reprtraceback = reprtraceback - self.reprcrash = reprcrash - self.sections = [] - - def addsection(self, name, content, sep="-"): - self.sections.append((name, content, sep)) - - def toterminal(self, tw): - self.reprtraceback.toterminal(tw) - for name, content, sep in self.sections: - tw.sep(sep, name) - tw.line(content) - -class ReprTraceback(TerminalRepr): - entrysep = "_ " - - def __init__(self, reprentries, extraline, style): - self.reprentries = reprentries - self.extraline = extraline - self.style = style - - def toterminal(self, tw): - sepok = False - for entry in self.reprentries: - if self.style == "long": - if sepok: - tw.sep(self.entrysep) - tw.line("") - sepok = True - entry.toterminal(tw) - if self.extraline: - tw.line(self.extraline) - -class ReprEntry(TerminalRepr): - localssep = "_ " - - def __init__(self, lines, reprfuncargs, reprlocals, filelocrepr): - self.lines = lines - self.reprfuncargs = reprfuncargs - self.reprlocals = reprlocals - self.reprfileloc = filelocrepr - - def toterminal(self, tw): - if self.reprfuncargs: - self.reprfuncargs.toterminal(tw) - for line in self.lines: - red = line.startswith("E ") - tw.line(line, bold=True, red=red) - if self.reprlocals: - #tw.sep(self.localssep, "Locals") - tw.line("") - self.reprlocals.toterminal(tw) - if self.reprfileloc: - tw.line("") - self.reprfileloc.toterminal(tw) - - def __str__(self): - return "%s\n%s\n%s" % ("\n".join(self.lines), - self.reprlocals, - self.reprfileloc) - -class ReprFileLocation(TerminalRepr): - def __init__(self, path, lineno, message): - self.path = str(path) - self.lineno = lineno - self.message = message - - def toterminal(self, tw): - # filename and lineno output for each entry, - # using an output format that most editors unterstand - msg = self.message - i = msg.find("\n") - if i != -1: - msg = msg[:i] - tw.line("%s:%s: %s" %(self.path, self.lineno, msg)) - -class ReprLocals(TerminalRepr): - def __init__(self, lines): - self.lines = lines - - def toterminal(self, tw): - for line in self.lines: - tw.line(line) - -class ReprFuncArgs(TerminalRepr): - def __init__(self, args): - self.args = args - - def toterminal(self, tw): - if self.args: - linesofar = "" - for name, value in self.args: - ns = "%s = %s" %(name, value) - if len(ns) + len(linesofar) + 2 > tw.fullwidth: - if linesofar: - tw.line(linesofar) - linesofar = ns - else: - if linesofar: - linesofar += ", " + ns - else: - linesofar = ns - if linesofar: - tw.line(linesofar) - tw.line("") - - - -class SafeRepr(repr.Repr): - """ subclass of repr.Repr that limits the resulting size of repr() - and includes information on exceptions raised during the call. 
- """ - def __init__(self, *args, **kwargs): - repr.Repr.__init__(self, *args, **kwargs) - self.maxstring = 240 # 3 * 80 chars - self.maxother = 160 # 2 * 80 chars - - def repr(self, x): - return self._callhelper(repr.Repr.repr, self, x) - - def repr_instance(self, x, level): - return self._callhelper(builtin_repr, x) - - def _callhelper(self, call, x, *args): - try: - # Try the vanilla repr and make sure that the result is a string - s = call(x, *args) - except (KeyboardInterrupt, MemoryError, SystemExit): - raise - except: - cls, e, tb = sys.exc_info() - try: - exc_name = cls.__name__ - except: - exc_name = 'unknown' - try: - exc_info = str(e) - except: - exc_info = 'unknown' - return '<[%s("%s") raised in repr()] %s object at 0x%x>' % ( - exc_name, exc_info, x.__class__.__name__, id(x)) - else: - if len(s) > self.maxstring: - i = max(0, (self.maxstring-3)//2) - j = max(0, self.maxstring-3-i) - s = s[:i] + '...' + s[len(s)-j:] - return s - -safe_repr = SafeRepr().repr - -oldbuiltins = {} - -def patch_builtins(assertion=True, compile=True): - """ put compile and AssertionError builtins to Python's builtins. """ - if assertion: - from _py.code import assertion - l = oldbuiltins.setdefault('AssertionError', []) - l.append(py.builtin.builtins.AssertionError) - py.builtin.builtins.AssertionError = assertion.AssertionError - if compile: - l = oldbuiltins.setdefault('compile', []) - l.append(py.builtin.builtins.compile) - py.builtin.builtins.compile = py.code.compile - -def unpatch_builtins(assertion=True, compile=True): - """ remove compile and AssertionError builtins from Python builtins. """ - if assertion: - py.builtin.builtins.AssertionError = oldbuiltins['AssertionError'].pop() - if compile: - py.builtin.builtins.compile = oldbuiltins['compile'].pop() - -def getrawcode(obj): - """ return code object for given function. """ - obj = getattr(obj, 'im_func', obj) - obj = getattr(obj, 'func_code', obj) - obj = getattr(obj, 'f_code', obj) - obj = getattr(obj, '__code__', obj) - return obj - --- a/_py/path/__init__.py +++ /dev/null @@ -1,1 +0,0 @@ -""" unified file system api """ --- a/_py/test/__init__.py +++ /dev/null @@ -1,1 +0,0 @@ -""" versatile unit-testing tool + libraries """ --- a/_py/cmdline/pytest.py +++ /dev/null @@ -1,5 +0,0 @@ -#!/usr/bin/env python -import py - -def main(): - py.test.cmdline.main() --- a/_py/log/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -""" logging API ('producers' and 'consumers' connected via keywords) """ - --- a/_py/process/__init__.py +++ /dev/null @@ -1,1 +0,0 @@ -""" high-level sub-process handling """ --- a/_py/code/_assertionold.py +++ /dev/null @@ -1,558 +0,0 @@ -import py -import sys, inspect -from compiler import parse, ast, pycodegen -from _py.code.assertion import BuiltinAssertionError, _format_explanation - -passthroughex = (KeyboardInterrupt, SystemExit, MemoryError) - -class Failure: - def __init__(self, node): - self.exc, self.value, self.tb = sys.exc_info() - self.node = node - -class View(object): - """View base class. - - If C is a subclass of View, then C(x) creates a proxy object around - the object x. The actual class of the proxy is not C in general, - but a *subclass* of C determined by the rules below. To avoid confusion - we call view class the class of the proxy (a subclass of C, so of View) - and object class the class of x. - - Attributes and methods not found in the proxy are automatically read on x. - Other operations like setting attributes are performed on the proxy, as - determined by its view class. 
The object x is available from the proxy - as its __obj__ attribute. - - The view class selection is determined by the __view__ tuples and the - optional __viewkey__ method. By default, the selected view class is the - most specific subclass of C whose __view__ mentions the class of x. - If no such subclass is found, the search proceeds with the parent - object classes. For example, C(True) will first look for a subclass - of C with __view__ = (..., bool, ...) and only if it doesn't find any - look for one with __view__ = (..., int, ...), and then ..., object,... - If everything fails the class C itself is considered to be the default. - - Alternatively, the view class selection can be driven by another aspect - of the object x, instead of the class of x, by overriding __viewkey__. - See last example at the end of this module. - """ - - _viewcache = {} - __view__ = () - - def __new__(rootclass, obj, *args, **kwds): - self = object.__new__(rootclass) - self.__obj__ = obj - self.__rootclass__ = rootclass - key = self.__viewkey__() - try: - self.__class__ = self._viewcache[key] - except KeyError: - self.__class__ = self._selectsubclass(key) - return self - - def __getattr__(self, attr): - # attributes not found in the normal hierarchy rooted on View - # are looked up in the object's real class - return getattr(self.__obj__, attr) - - def __viewkey__(self): - return self.__obj__.__class__ - - def __matchkey__(self, key, subclasses): - if inspect.isclass(key): - keys = inspect.getmro(key) - else: - keys = [key] - for key in keys: - result = [C for C in subclasses if key in C.__view__] - if result: - return result - return [] - - def _selectsubclass(self, key): - subclasses = list(enumsubclasses(self.__rootclass__)) - for C in subclasses: - if not isinstance(C.__view__, tuple): - C.__view__ = (C.__view__,) - choices = self.__matchkey__(key, subclasses) - if not choices: - return self.__rootclass__ - elif len(choices) == 1: - return choices[0] - else: - # combine the multiple choices - return type('?', tuple(choices), {}) - - def __repr__(self): - return '%s(%r)' % (self.__rootclass__.__name__, self.__obj__) - - -def enumsubclasses(cls): - for subcls in cls.__subclasses__(): - for subsubclass in enumsubclasses(subcls): - yield subsubclass - yield cls - - -class Interpretable(View): - """A parse tree node with a few extra methods.""" - explanation = None - - def is_builtin(self, frame): - return False - - def eval(self, frame): - # fall-back for unknown expression nodes - try: - expr = ast.Expression(self.__obj__) - expr.filename = '' - self.__obj__.filename = '' - co = pycodegen.ExpressionCodeGenerator(expr).getCode() - result = frame.eval(co) - except passthroughex: - raise - except: - raise Failure(self) - self.result = result - self.explanation = self.explanation or frame.repr(self.result) - - def run(self, frame): - # fall-back for unknown statement nodes - try: - expr = ast.Module(None, ast.Stmt([self.__obj__])) - expr.filename = '' - co = pycodegen.ModuleCodeGenerator(expr).getCode() - frame.exec_(co) - except passthroughex: - raise - except: - raise Failure(self) - - def nice_explanation(self): - return _format_explanation(self.explanation) - - -class Name(Interpretable): - __view__ = ast.Name - - def is_local(self, frame): - co = compile('%r in locals() is not globals()' % self.name, '?', 'eval') - try: - return frame.is_true(frame.eval(co)) - except passthroughex: - raise - except: - return False - - def is_global(self, frame): - co = compile('%r in globals()' % self.name, '?', 'eval') - 
try: - return frame.is_true(frame.eval(co)) - except passthroughex: - raise - except: - return False - - def is_builtin(self, frame): - co = compile('%r not in locals() and %r not in globals()' % ( - self.name, self.name), '?', 'eval') - try: - return frame.is_true(frame.eval(co)) - except passthroughex: - raise - except: - return False - - def eval(self, frame): - super(Name, self).eval(frame) - if not self.is_local(frame): - self.explanation = self.name - -class Compare(Interpretable): - __view__ = ast.Compare - - def eval(self, frame): - expr = Interpretable(self.expr) - expr.eval(frame) - for operation, expr2 in self.ops: - if hasattr(self, 'result'): - # shortcutting in chained expressions - if not frame.is_true(self.result): - break - expr2 = Interpretable(expr2) - expr2.eval(frame) - self.explanation = "%s %s %s" % ( - expr.explanation, operation, expr2.explanation) - co = compile("__exprinfo_left %s __exprinfo_right" % operation, - '?', 'eval') - try: - self.result = frame.eval(co, __exprinfo_left=expr.result, - __exprinfo_right=expr2.result) - except passthroughex: - raise - except: - raise Failure(self) - expr = expr2 - -class And(Interpretable): - __view__ = ast.And - - def eval(self, frame): - explanations = [] - for expr in self.nodes: - expr = Interpretable(expr) - expr.eval(frame) - explanations.append(expr.explanation) - self.result = expr.result - if not frame.is_true(expr.result): - break - self.explanation = '(' + ' and '.join(explanations) + ')' - -class Or(Interpretable): - __view__ = ast.Or - - def eval(self, frame): - explanations = [] - for expr in self.nodes: - expr = Interpretable(expr) - expr.eval(frame) - explanations.append(expr.explanation) - self.result = expr.result - if frame.is_true(expr.result): - break - self.explanation = '(' + ' or '.join(explanations) + ')' - - -# == Unary operations == -keepalive = [] -for astclass, astpattern in { - ast.Not : 'not __exprinfo_expr', - ast.Invert : '(~__exprinfo_expr)', - }.items(): - - class UnaryArith(Interpretable): - __view__ = astclass - - def eval(self, frame, astpattern=astpattern, - co=compile(astpattern, '?', 'eval')): - expr = Interpretable(self.expr) - expr.eval(frame) - self.explanation = astpattern.replace('__exprinfo_expr', - expr.explanation) - try: - self.result = frame.eval(co, __exprinfo_expr=expr.result) - except passthroughex: - raise - except: - raise Failure(self) - - keepalive.append(UnaryArith) - -# == Binary operations == -for astclass, astpattern in { - ast.Add : '(__exprinfo_left + __exprinfo_right)', - ast.Sub : '(__exprinfo_left - __exprinfo_right)', - ast.Mul : '(__exprinfo_left * __exprinfo_right)', - ast.Div : '(__exprinfo_left / __exprinfo_right)', - ast.Mod : '(__exprinfo_left % __exprinfo_right)', - ast.Power : '(__exprinfo_left ** __exprinfo_right)', - }.items(): - - class BinaryArith(Interpretable): - __view__ = astclass - - def eval(self, frame, astpattern=astpattern, - co=compile(astpattern, '?', 'eval')): - left = Interpretable(self.left) - left.eval(frame) - right = Interpretable(self.right) - right.eval(frame) - self.explanation = (astpattern - .replace('__exprinfo_left', left .explanation) - .replace('__exprinfo_right', right.explanation)) - try: - self.result = frame.eval(co, __exprinfo_left=left.result, - __exprinfo_right=right.result) - except passthroughex: - raise - except: - raise Failure(self) - - keepalive.append(BinaryArith) - - -class CallFunc(Interpretable): - __view__ = ast.CallFunc - - def is_bool(self, frame): - co = compile('isinstance(__exprinfo_value, bool)', 
'?', 'eval') - try: - return frame.is_true(frame.eval(co, __exprinfo_value=self.result)) - except passthroughex: - raise - except: - return False - - def eval(self, frame): - node = Interpretable(self.node) - node.eval(frame) - explanations = [] - vars = {'__exprinfo_fn': node.result} - source = '__exprinfo_fn(' - for a in self.args: - if isinstance(a, ast.Keyword): - keyword = a.name - a = a.expr - else: - keyword = None - a = Interpretable(a) - a.eval(frame) - argname = '__exprinfo_%d' % len(vars) - vars[argname] = a.result - if keyword is None: - source += argname + ',' - explanations.append(a.explanation) - else: - source += '%s=%s,' % (keyword, argname) - explanations.append('%s=%s' % (keyword, a.explanation)) - if self.star_args: - star_args = Interpretable(self.star_args) - star_args.eval(frame) - argname = '__exprinfo_star' - vars[argname] = star_args.result - source += '*' + argname + ',' - explanations.append('*' + star_args.explanation) - if self.dstar_args: - dstar_args = Interpretable(self.dstar_args) - dstar_args.eval(frame) - argname = '__exprinfo_kwds' - vars[argname] = dstar_args.result - source += '**' + argname + ',' - explanations.append('**' + dstar_args.explanation) - self.explanation = "%s(%s)" % ( - node.explanation, ', '.join(explanations)) - if source.endswith(','): - source = source[:-1] - source += ')' - co = compile(source, '?', 'eval') - try: - self.result = frame.eval(co, **vars) - except passthroughex: - raise - except: - raise Failure(self) - if not node.is_builtin(frame) or not self.is_bool(frame): - r = frame.repr(self.result) - self.explanation = '%s\n{%s = %s\n}' % (r, r, self.explanation) - -class Getattr(Interpretable): - __view__ = ast.Getattr - - def eval(self, frame): - expr = Interpretable(self.expr) - expr.eval(frame) - co = compile('__exprinfo_expr.%s' % self.attrname, '?', 'eval') - try: - self.result = frame.eval(co, __exprinfo_expr=expr.result) - except passthroughex: - raise - except: - raise Failure(self) - self.explanation = '%s.%s' % (expr.explanation, self.attrname) - # if the attribute comes from the instance, its value is interesting - co = compile('hasattr(__exprinfo_expr, "__dict__") and ' - '%r in __exprinfo_expr.__dict__' % self.attrname, - '?', 'eval') - try: - from_instance = frame.is_true( - frame.eval(co, __exprinfo_expr=expr.result)) - except passthroughex: - raise - except: - from_instance = True - if from_instance: - r = frame.repr(self.result) - self.explanation = '%s\n{%s = %s\n}' % (r, r, self.explanation) - -# == Re-interpretation of full statements == - -class Assert(Interpretable): - __view__ = ast.Assert - - def run(self, frame): - test = Interpretable(self.test) - test.eval(frame) - # simplify 'assert False where False = ...' - if (test.explanation.startswith('False\n{False = ') and - test.explanation.endswith('\n}')): - test.explanation = test.explanation[15:-2] - # print the result as 'assert ' - self.result = test.result - self.explanation = 'assert ' + test.explanation - if not frame.is_true(test.result): - try: - raise BuiltinAssertionError - except passthroughex: - raise - except: - raise Failure(self) - -class Assign(Interpretable): - __view__ = ast.Assign - - def run(self, frame): - expr = Interpretable(self.expr) - expr.eval(frame) - self.result = expr.result - self.explanation = '... 
= ' + expr.explanation - # fall-back-run the rest of the assignment - ass = ast.Assign(self.nodes, ast.Name('__exprinfo_expr')) - mod = ast.Module(None, ast.Stmt([ass])) - mod.filename = '' - co = pycodegen.ModuleCodeGenerator(mod).getCode() - try: - frame.exec_(co, __exprinfo_expr=expr.result) - except passthroughex: - raise - except: - raise Failure(self) - -class Discard(Interpretable): - __view__ = ast.Discard - - def run(self, frame): - expr = Interpretable(self.expr) - expr.eval(frame) - self.result = expr.result - self.explanation = expr.explanation - -class Stmt(Interpretable): - __view__ = ast.Stmt - - def run(self, frame): - for stmt in self.nodes: - stmt = Interpretable(stmt) - stmt.run(frame) - - -def report_failure(e): - explanation = e.node.nice_explanation() - if explanation: - explanation = ", in: " + explanation - else: - explanation = "" - sys.stdout.write("%s: %s%s\n" % (e.exc.__name__, e.value, explanation)) - -def check(s, frame=None): - if frame is None: - import sys - frame = sys._getframe(1) - frame = py.code.Frame(frame) - expr = parse(s, 'eval') - assert isinstance(expr, ast.Expression) - node = Interpretable(expr.node) - try: - node.eval(frame) - except passthroughex: - raise - except Failure: - e = sys.exc_info()[1] - report_failure(e) - else: - if not frame.is_true(node.result): - sys.stderr.write("assertion failed: %s\n" % node.nice_explanation()) - - -########################################################### -# API / Entry points -# ######################################################### - -def interpret(source, frame, should_fail=False): - module = Interpretable(parse(source, 'exec').node) - #print "got module", module - if isinstance(frame, py.std.types.FrameType): - frame = py.code.Frame(frame) - try: - module.run(frame) - except Failure: - e = sys.exc_info()[1] - return getfailure(e) - except passthroughex: - raise - except: - import traceback - traceback.print_exc() - if should_fail: - return ("(assertion failed, but when it was re-run for " - "printing intermediate values, it did not fail. 
Suggestions: " - "compute assert expression before the assert or use --nomagic)") - else: - return None - -def getmsg(excinfo): - if isinstance(excinfo, tuple): - excinfo = py.code.ExceptionInfo(excinfo) - #frame, line = gettbline(tb) - #frame = py.code.Frame(frame) - #return interpret(line, frame) - - tb = excinfo.traceback[-1] - source = str(tb.statement).strip() - x = interpret(source, tb.frame, should_fail=True) - if not isinstance(x, str): - raise TypeError("interpret returned non-string %r" % (x,)) - return x - -def getfailure(e): - explanation = e.node.nice_explanation() - if str(e.value): - lines = explanation.split('\n') - lines[0] += " << %s" % (e.value,) - explanation = '\n'.join(lines) - text = "%s: %s" % (e.exc.__name__, explanation) - if text.startswith('AssertionError: assert '): - text = text[16:] - return text - -def run(s, frame=None): - if frame is None: - import sys - frame = sys._getframe(1) - frame = py.code.Frame(frame) - module = Interpretable(parse(s, 'exec').node) - try: - module.run(frame) - except Failure: - e = sys.exc_info()[1] - report_failure(e) - - -if __name__ == '__main__': - # example: - def f(): - return 5 - def g(): - return 3 - def h(x): - return 'never' - check("f() * g() == 5") - check("not f()") - check("not (f() and g() or 0)") - check("f() == g()") - i = 4 - check("i == f()") - check("len(f()) == 0") - check("isinstance(2+3+4, float)") - - run("x = i") - check("x == 5") - - run("assert not f(), 'oops'") - run("a, b, c = 1, 2") - run("a, b, c = f()") - - check("max([f(),g()]) == 4") - check("'hello'[g()] == 'h'") - run("'guk%d' % h(f())") --- a/_py/cmdline/pylookup.py +++ /dev/null @@ -1,83 +0,0 @@ -#!/usr/bin/env python - -"""\ -py.lookup [search_directory] SEARCH_STRING [options] - -Looks recursively at Python files for a SEARCH_STRING, starting from the -present working directory. 
Prints the line, with the filename and line-number -prepended.""" - -import sys, os -import py -from _py.io.terminalwriter import ansi_print, terminal_width -import re - -def rec(p): - return p.check(dotfile=0) - -parser = py.std.optparse.OptionParser(usage=__doc__) -parser.add_option("-i", "--ignore-case", action="store_true", dest="ignorecase", - help="ignore case distinctions") -parser.add_option("-C", "--context", action="store", type="int", dest="context", - default=0, help="How many lines of output to show") - -def find_indexes(search_line, string): - indexes = [] - before = 0 - while 1: - i = search_line.find(string, before) - if i == -1: - break - indexes.append(i) - before = i + len(string) - return indexes - -def main(): - (options, args) = parser.parse_args() - if len(args) == 2: - search_dir, string = args - search_dir = py.path.local(search_dir) - else: - search_dir = py.path.local() - string = args[0] - if options.ignorecase: - string = string.lower() - for x in search_dir.visit('*.py', rec): - # match filename directly - s = x.relto(search_dir) - if options.ignorecase: - s = s.lower() - if s.find(string) != -1: - sys.stdout.write("%s: filename matches %r" %(x, string) + "\n") - - try: - s = x.read() - except py.error.ENOENT: - pass # whatever, probably broken link (ie emacs lock) - searchs = s - if options.ignorecase: - searchs = s.lower() - if s.find(string) != -1: - lines = s.splitlines() - if options.ignorecase: - searchlines = s.lower().splitlines() - else: - searchlines = lines - for i, (line, searchline) in enumerate(zip(lines, searchlines)): - indexes = find_indexes(searchline, string) - if not indexes: - continue - if not options.context: - sys.stdout.write("%s:%d: " %(x.relto(search_dir), i+1)) - last_index = 0 - for index in indexes: - sys.stdout.write(line[last_index: index]) - ansi_print(line[index: index+len(string)], - file=sys.stdout, esc=31, newline=False) - last_index = index + len(string) - sys.stdout.write(line[last_index:] + "\n") - else: - context = (options.context)/2 - for count in range(max(0, i-context), min(len(lines) - 1, i+context+1)): - print("%s:%d: %s" %(x.relto(search_dir), count+1, lines[count].rstrip())) - print("-" * terminal_width) --- a/_py/_com.py +++ /dev/null @@ -1,125 +0,0 @@ -""" -py lib plugins and plugin call management -""" - -import py -import inspect - -__all__ = ['Registry', 'MultiCall', 'comregistry', 'HookRelay'] - -class MultiCall: - """ execute a call into multiple python functions/methods. 
""" - - def __init__(self, methods, kwargs, firstresult=False): - self.methods = methods[:] - self.kwargs = kwargs.copy() - self.kwargs['__multicall__'] = self - self.results = [] - self.firstresult = firstresult - - def __repr__(self): - status = "%d results, %d meths" % (len(self.results), len(self.methods)) - return "" %(status, self.kwargs) - - def execute(self): - while self.methods: - method = self.methods.pop() - kwargs = self.getkwargs(method) - res = method(**kwargs) - if res is not None: - self.results.append(res) - if self.firstresult: - return res - if not self.firstresult: - return self.results - - def getkwargs(self, method): - kwargs = {} - for argname in varnames(method): - try: - kwargs[argname] = self.kwargs[argname] - except KeyError: - pass # might be optional param - return kwargs - -def varnames(func): - ismethod = inspect.ismethod(func) - rawcode = py.code.getrawcode(func) - try: - return rawcode.co_varnames[ismethod:] - except AttributeError: - return () - -class Registry: - """ - Manage Plugins: register/unregister call calls to plugins. - """ - def __init__(self, plugins=None): - if plugins is None: - plugins = [] - self._plugins = plugins - - def register(self, plugin): - assert not isinstance(plugin, str) - assert not plugin in self._plugins - self._plugins.append(plugin) - - def unregister(self, plugin): - self._plugins.remove(plugin) - - def isregistered(self, plugin): - return plugin in self._plugins - - def __iter__(self): - return iter(self._plugins) - - def listattr(self, attrname, plugins=None, extra=(), reverse=False): - l = [] - if plugins is None: - plugins = self._plugins - candidates = list(plugins) + list(extra) - for plugin in candidates: - try: - l.append(getattr(plugin, attrname)) - except AttributeError: - continue - if reverse: - l.reverse() - return l - -class HookRelay: - def __init__(self, hookspecs, registry): - self._hookspecs = hookspecs - self._registry = registry - for name, method in vars(hookspecs).items(): - if name[:1] != "_": - setattr(self, name, self._makecall(name)) - - def _makecall(self, name, extralookup=None): - hookspecmethod = getattr(self._hookspecs, name) - firstresult = getattr(hookspecmethod, 'firstresult', False) - return HookCaller(self, name, firstresult=firstresult, - extralookup=extralookup) - - def _getmethods(self, name, extralookup=()): - return self._registry.listattr(name, extra=extralookup) - - def _performcall(self, name, multicall): - return multicall.execute() - -class HookCaller: - def __init__(self, hookrelay, name, firstresult, extralookup=None): - self.hookrelay = hookrelay - self.name = name - self.firstresult = firstresult - self.extralookup = extralookup and [extralookup] or () - - def __repr__(self): - return "" %(self.name,) - - def __call__(self, **kwargs): - methods = self.hookrelay._getmethods(self.name, self.extralookup) - mc = MultiCall(methods, kwargs, firstresult=self.firstresult) - return self.hookrelay._performcall(self.name, mc) - -comregistry = Registry([]) From commits-noreply at bitbucket.org Thu Nov 5 03:19:28 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 5 Nov 2009 02:19:28 +0000 (UTC) Subject: [py-svn] py-trunk commit ae71a1d63d0d: largely improve and reshuffle docs, heading strongly towards a 1.1.0 Message-ID: <20091105021928.A12517EEE5@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1257387535 -3600 # Node ID 
ae71a1d63d0d79f61a5f047d4c38c0f457ba038a # Parent e024990032907e7def8bbe8e785dc26ef94ce38e largely improve and reshuffle docs, heading strongly towards a 1.1.0 --- a/doc/index.txt +++ b/doc/index.txt @@ -29,7 +29,6 @@ Other (minor) support functionality For the latest Release, see `PyPI project page`_ -.. _`download and installation`: download.html .. _`py-dev at codespeak net`: http://codespeak.net/mailman/listinfo/py-dev .. _`py.log`: log.html .. _`py.io`: io.html --- a/doc/test/funcargs.txt +++ b/doc/test/funcargs.txt @@ -1,43 +1,301 @@ -========================================================== -**funcargs**: test function arguments FTW -========================================================== +============================================================== +**funcargs**: advanced test setup and parametrization +============================================================== .. contents:: :local: :depth: 2 -Goals of the "funcarg" mechanism -========================================== +what are "funcargs" and what are they good for? +================================================= -Since version 1.0 py.test features the "funcarg" mechanism which -allows a Python test function to take arguments independently provided -by factory functions. Factory functions allow to encapsulate -all setup and fixture glue code into nicely separated objects -and provide a natural way for writing python test functions. -Compared to `xUnit style`_ the new mechanism is meant to: - -* make test functions easier to write and to read -* isolate test fixture creation to a single place -* bring new flexibility and power to test state management -* naturally extend towards parametrizing test functions - with multiple argument sets -* enable creation of zero-boilerplate test helper objects that - interact with the execution of a test function, see the - `blog post about the monkeypatch funcarg`_. - -If you find issues or have further suggestions for improving -the mechanism you are welcome to checkout `contact possibilities`_ page. +Named parameters of a test function are called *funcargs* for short. +A Funcarg can be a simple number of a complex object. To perform a +test function call each parameter is setup by a factory function. +To call a test function repeatedly with different funcargs sets +test parameters can be generated. .. _`contact possibilities`: ../contact.html +.. _`parametrizing tests, generalized`: http://tetamap.wordpress.com/2009/05/13/parametrizing-python-tests-generalized/ .. _`blog post about the monkeypatch funcarg`: http://tetamap.wordpress.com/2009/03/03/monkeypatching-in-unit-tests-done-right/ .. _`xUnit style`: xunit_setup.html + +.. _`funcarg factory`: +.. _factory: + +funcarg factories: setting up test function arguments +============================================================== + +Test functions can specify one ore more arguments ("funcargs") +and a test module or plugin can define factory functions that provide +the function argument. Let's look at a simple self-contained +example that you can put into a test module: + +.. sourcecode:: python + + # ./test_simplefactory.py + def pytest_funcarg__myfuncarg(request): + return 42 + + def test_function(myfuncarg): + assert myfuncarg == 17 + +If you run this with ``py.test test_simplefactory.py`` you see something like this: + +.. 
sourcecode:: python + + =========================== test session starts ============================ + python: platform linux2 -- Python 2.6.2 + test object 1: /home/hpk/hg/py/trunk/example/funcarg/test_simplefactory.py + + test_simplefactory.py F + + ================================ FAILURES ================================== + ______________________________ test_function _______________________________ + + myfuncarg = 42 + + def test_function(myfuncarg): + > assert myfuncarg == 17 + E assert 42 == 17 + + test_simplefactory.py:6: AssertionError + ======================== 1 failed in 0.11 seconds ========================== + + +This means that the test function was called with a ``myfuncarg`` value +of ``42`` and the assert fails accordingly. Here is how py.test +calls the test function: + +1. py.test discovers the ``test_function`` because of the ``test_`` prefix. + The test function needs a function argument named ``myfuncarg``. + A matching factory function is discovered by looking for the + name ``pytest_funcarg__myfuncarg``. + +2. ``pytest_funcarg__myfuncarg(request)`` is called and + returns the value for ``myfuncarg``. + +3. ``test_function(42)`` call is executed. + +Note that if you misspell a function argument or want +to use one that isn't available, you'll see an error +with a list of available function arguments. + +factory functions receive a `request object`_ +which they can use to register setup/teardown +functions or access meta data about a test. + +.. _`request object`: + +funcarg factory request objects +------------------------------------------ + +Request objects are passed to funcarg factories and allow +to access test configuration, test context and `useful caching +and finalization helpers`_. Here is a list of attributes: + +``request.function``: python function object requesting the argument + +``request.cls``: class object where the test function is defined in or None. + +``request.module``: module object where the test function is defined in. + +``request.config``: access to command line opts and general config + +``request.param``: if exists was passed by a previous `metafunc.addcall`_ + +.. _`useful caching and finalization helpers`: + + +registering funcarg related finalizers/cleanup +---------------------------------------------------- + +.. sourcecode:: python + + def addfinalizer(func): + """ call a finalizer function when test function finishes. """ + +Calling ``request.addfinalizer()`` is useful for scheduling teardown +functions. Here is an example for providing a ``myfile`` +object that is to be closed when the execution of a +test function finishes. + +.. sourcecode:: python + + def pytest_funcarg__myfile(self, request): + # ... create and open a unique per-function "myfile" object ... + request.addfinalizer(lambda: myfile.close()) + return myfile + + +managing fixtures across test modules and test runs +---------------------------------------------------------- + +.. sourcecode:: python + + def cached_setup(setup, teardown=None, scope="module", extrakey=None): + """ cache and return result of calling setup(). + + The requested argument name, the scope and the ``extrakey`` + determine the cache key. The scope also determines when + teardown(result) will be called. valid scopes are: + scope == 'function': when the single test function run finishes. + scope == 'module': when tests in a different module are run + scope == 'session': when tests of the session have run. 
+ """ + +Calling ``request.cached_setup()`` helps you to manage fixture +objects across several scopes. For example, for creating a Database object +that is to be setup only once during a test session you can use the helper +like this: + +.. sourcecode:: python + + def pytest_funcarg__database(request): + return request.cached_setup( + setup=lambda: Database("..."), + teardown=lambda val: val.close(), + scope="session" + ) + + +requesting values of other funcargs +--------------------------------------------- + +.. sourcecode:: python + + def getfuncargvalue(name): + """ Lookup and call function argument factory for the given name. + Each function argument is only created once per function setup. + """ + +``request.getfuncargvalue(name)`` calls another funcarg factory function. +You can use this function if you want to `decorate a funcarg`_, i.e. +you want to provide the "normal" value but add something +extra. If a factory cannot be found a ``request.Error`` +exception will be raised. + +.. _`test generators`: +.. _`parametrizing-tests`: + +generating parametrized tests +=========================================================== + +You can parametrize multiple runs of the same test +function by adding new test function calls with different +function argument values. Let's look at a simple self-contained +example: + +.. sourcecode:: python + + # ./test_example.py + def pytest_generate_tests(metafunc): + if "numiter" in metafunc.funcargnames: + for i in range(10): + metafunc.addcall(funcargs=dict(numiter=i)) + + def test_func(numiter): + assert numiter < 9 + +If you run this with ``py.test test_example.py`` you'll get: + +.. sourcecode:: python + + ============================= test session starts ========================== + python: platform linux2 -- Python 2.6.2 + test object 1: /home/hpk/hg/py/trunk/test_example.py + + test_example.py .........F + + ================================ FAILURES ================================== + __________________________ test_func.test_func[9] __________________________ + + numiter = 9 + + def test_func(numiter): + > assert numiter < 9 + E assert 9 < 9 + + /home/hpk/hg/py/trunk/test_example.py:10: AssertionError + + +Here is what happens in detail: + +1. ``pytest_generate_tests(metafunc)`` hook is called once for each test + function. It adds ten new function calls with explicit function arguments. + +2. **execute tests**: ``test_func(numiter)`` is called ten times with + ten different arguments. + +.. _`metafunc object`: + +test generators and metafunc objects +------------------------------------------- + +metafunc objects are passed to the ``pytest_generate_tests`` hook. +They help to inspect a testfunction and to generate tests +according to test configuration or values specified +in the class or module where a test function is defined: + +``metafunc.funcargnames``: set of required function arguments for given function + +``metafunc.function``: underlying python test function + +``metafunc.cls``: class object where the test function is defined in or None. + +``metafunc.module``: the module object where the test function is defined in. + +``metafunc.config``: access to command line opts and general config + + +.. _`metafunc.addcall`: + +the ``metafunc.addcall()`` method +----------------------------------------------- + +.. sourcecode:: python + + def addcall(funcargs={}, id=None, param=None): + """ trigger a new test function call. 
""" + +``funcargs`` can be a dictionary of argument names +mapped to values - providing it is called *direct parametrization*. + +If you provide an `id`` it will be used for reporting +and identification purposes. If you don't supply an `id` +the stringified counter of the list of added calls will be used. +``id`` values needs to be unique between all +invocations for a given test function. + +``param`` if specified will be seen by any +`funcarg factory`_ as a ``request.param`` attribute. +Setting it is called *indirect parametrization*. + +Indirect parametrization is preferable if test values are +expensive to setup or can only be created in certain environments. +Test generators and thus ``addcall()`` invocations are performed +during test collection which is separate from the actual test +setup and test run phase. With distributed testing collection +and test setup/run happens in different process. + + + .. _`tutorial examples`: Tutorial Examples ======================================= +To see how you can implement custom paramtrization schemes, +see e.g. `parametrizing tests, generalized`_ (blog post). + +To enable creation of test support code that can flexibly +register setup/teardown functions see the `blog post about +the monkeypatch funcarg`_. + +If you find issues or have further suggestions for improving +the mechanism you are welcome to checkout `contact possibilities`_ page. .. _`application setup tutorial example`: .. _appsetup: @@ -274,262 +532,3 @@ methods in a convenient way. .. _`py.path.local`: ../path.html#local .. _`conftest plugin`: customize.html#conftestplugin - -.. _`funcarg factory`: -.. _factory: - -funcarg factories: setting up test function arguments -============================================================== - -Test functions can specify one ore more arguments ("funcargs") -and a test module or plugin can define functions that provide -the function argument. Let's look at a simple self-contained -example that you can put into a test module: - -.. sourcecode:: python - - # ./test_simplefactory.py - def pytest_funcarg__myfuncarg(request): - return 42 - - def test_function(myfuncarg): - assert myfuncarg == 17 - -If you run this with ``py.test test_simplefactory.py`` you see something like this: - -.. sourcecode:: python - - =========================== test session starts ============================ - python: platform linux2 -- Python 2.6.2 - test object 1: /home/hpk/hg/py/trunk/example/funcarg/test_simplefactory.py - - test_simplefactory.py F - - ================================ FAILURES ================================== - ______________________________ test_function _______________________________ - - myfuncarg = 42 - - def test_function(myfuncarg): - > assert myfuncarg == 17 - E assert 42 == 17 - - test_simplefactory.py:6: AssertionError - ======================== 1 failed in 0.11 seconds ========================== - - -This means that the test function got executed and the assertion failed. -Here is how py.test comes to execute this test function: - -1. py.test discovers the ``test_function`` because of the ``test_`` prefix. - The test function needs a function argument named ``myfuncarg``. - A matching factory function is discovered by looking for the special - name ``pytest_funcarg__myfuncarg``. - -2. ``pytest_funcarg__myfuncarg(request)`` is called and - returns the value for ``myfuncarg``. - -3. ``test_function(42)`` call is executed. 
- -Note that if you misspell a function argument or want -to use one that isn't available, an error with a list of -available function argument is provided. - -For more interesting factory functions that make good use of the -`request object`_ please see the `application setup tutorial example`_. - -.. _`request object`: - -funcarg factory request objects ------------------------------------------- - -Request objects are passed to funcarg factories and allow -to access test configuration, test context and `useful caching -and finalization helpers`_. Here is a list of attributes: - -``request.function``: python function object requesting the argument - -``request.cls``: class object where the test function is defined in or None. - -``request.module``: module object where the test function is defined in. - -``request.config``: access to command line opts and general config - -``request.param``: if exists was passed by a previous `metafunc.addcall`_ - -.. _`useful caching and finalization helpers`: - - -registering funcarg related finalizers/cleanup ----------------------------------------------------- - -.. sourcecode:: python - - def addfinalizer(func): - """ call a finalizer function when test function finishes. """ - -Calling ``request.addfinalizer()`` is useful for scheduling teardown -functions. Here is an example for providing a ``myfile`` -object that is to be closed when the execution of a -test function finishes. - -.. sourcecode:: python - - def pytest_funcarg__myfile(self, request): - # ... create and open a unique per-function "myfile" object ... - request.addfinalizer(lambda: myfile.close()) - return myfile - - -managing fixtures across test modules and test runs ----------------------------------------------------------- - -.. sourcecode:: python - - def cached_setup(setup, teardown=None, scope="module", extrakey=None): - """ cache and return result of calling setup(). - - The requested argument name, the scope and the ``extrakey`` - determine the cache key. The scope also determines when - teardown(result) will be called. valid scopes are: - scope == 'function': when the single test function run finishes. - scope == 'module': when tests in a different module are run - scope == 'session': when tests of the session have run. - """ - -Calling ``request.cached_setup()`` helps you to manage fixture -objects across several scopes. For example, for creating a Database object -that is to be setup only once during a test session you can use the helper -like this: - -.. sourcecode:: python - - def pytest_funcarg__database(request): - return request.cached_setup( - setup=lambda: Database("..."), - teardown=lambda val: val.close(), - scope="session" - ) - - -requesting values of other funcargs ---------------------------------------------- - -.. sourcecode:: python - - def getfuncargvalue(name): - """ Lookup and call function argument factory for the given name. - Each function argument is only created once per function setup. - """ - -``request.getfuncargvalue(name)`` calls another funcarg factory function. -You can use this function if you want to `decorate a funcarg`_, i.e. -you want to provide the "normal" value but add something -extra. If a factory cannot be found a ``request.Error`` -exception will be raised. - -.. _`test generators`: -.. 
_`parametrizing-tests`: - -generating parametrized tests -=========================================================== - -You can parametrize multiple runs of the same test -function by adding new test function calls with different -function argument values. Let's look at a simple self-contained -example: - -.. sourcecode:: python - - # ./test_example.py - def pytest_generate_tests(metafunc): - if "numiter" in metafunc.funcargnames: - for i in range(10): - metafunc.addcall(funcargs=dict(numiter=i)) - - def test_func(numiter): - assert numiter < 9 - -If you run this with ``py.test test_example.py`` you'll get: - -.. sourcecode:: python - - ============================= test session starts ========================== - python: platform linux2 -- Python 2.6.2 - test object 1: /home/hpk/hg/py/trunk/test_example.py - - test_example.py .........F - - ================================ FAILURES ================================== - __________________________ test_func.test_func[9] __________________________ - - numiter = 9 - - def test_func(numiter): - > assert numiter < 9 - E assert 9 < 9 - - /home/hpk/hg/py/trunk/test_example.py:10: AssertionError - - -Here is what happens in detail: - -1. ``pytest_generate_tests(metafunc)`` hook is called once for each test - function. It adds ten new function calls with explicit function arguments. - -2. **execute tests**: ``test_func(numiter)`` is called ten times with - ten different arguments. - -.. _`metafunc object`: - -test generators and metafunc objects -------------------------------------------- - -metafunc objects are passed to the ``pytest_generate_tests`` hook. -They help to inspect a testfunction and to generate tests -according to test configuration or values specified -in the class or module where a test function is defined: - -``metafunc.funcargnames``: set of required function arguments for given function - -``metafunc.function``: underlying python test function - -``metafunc.cls``: class object where the test function is defined in or None. - -``metafunc.module``: the module object where the test function is defined in. - -``metafunc.config``: access to command line opts and general config - - -.. _`metafunc.addcall`: - -the ``metafunc.addcall()`` method ------------------------------------------------ - -.. sourcecode:: python - - def addcall(funcargs={}, id=None, param=None): - """ trigger a new test function call. """ - -``funcargs`` can be a dictionary of argument names -mapped to values - providing it is called *direct parametrization*. - -If you provide an `id`` it will be used for reporting -and identification purposes. If you don't supply an `id` -the stringified counter of the list of added calls will be used. -``id`` values needs to be unique between all -invocations for a given test function. - -``param`` if specified will be seen by any -`funcarg factory`_ as a ``request.param`` attribute. -Setting it is called *indirect parametrization*. - -Indirect parametrization is preferable if test values are -expensive to setup or can only be created in certain environments. -Test generators and thus ``addcall()`` invocations are performed -during test collection which is separate from the actual test -setup and test run phase. With distributed testing collection -and test setup/run happens in different process. - - - Binary file contrib/pytest_coverage/header_bg.jpg has changed --- a/doc/test/features.txt +++ b/doc/test/features.txt @@ -26,7 +26,7 @@ naming patterns. 
As ``py.test`` operate cmdline tool you can easily have a command line utility and some tests in the same file. -supports many testing practises and methods +supports several testing practises and methods ================================================================== py.test supports many testing methods conventionally used in --- a/py/plugin/pytest_default.py +++ b/py/plugin/pytest_default.py @@ -70,7 +70,7 @@ def pytest_addoption(parser): add_dist_options(parser) else: parser.epilog = ( - "execnet missing: --looponfailing and distributed testing not available.") + "'execnet' package required for --looponfailing / distributed testing.") def add_dist_options(parser): # see http://pytest.org/help/dist") --- /dev/null +++ b/doc/announce/release-1.1.0.txt @@ -0,0 +1,115 @@ +py.test/pylib 1.1.0: Python3, Jython, advanced skipping, cleanups ... +-------------------------------------------------------------------------------- + +Features: + +* compatible to Python3 (single py2/py3 source), works with Distribute +* generalized marking_: mark tests one a whole-class or whole-module basis +* conditional skipping_: skip/xfail based on platform/dependencies + +Fixes: + +* code reduction and "de-magification" (e.g. 23 KLoc -> 11 KLOC) +* distribute testing requires the now separately released 'execnet' package +* funcarg-setup/caching, "same-name" test modules now cause an exlicit error +* de-cluttered reporting, --report option for skipped/xfail details + +Compatibilities + +1.1.0 should allow running test code that already worked well with 1.0.2 +plus some more due to improved unittest/nose compatibility. + +More information: + + http://pytest.org + +thanks and have fun, + +holger (http://twitter.com/hpk42) + +.. _marking: ../test/plugin/mark.html +.. _skipping: ../test/plugin/skipping.html + + +Changelog 1.0.2 -> 1.1.0 +----------------------------------------------------------------------- + +* remove py.rest tool and internal namespace - it was + never really advertised and can still be used with + the old release if needed. If there is interest + it could be revived into its own tool i guess. + +* fix issue48 and issue59: raise an Error if the module + from an imported test file does not seem to come from + the filepath - avoids "same-name" confusion that has + been reported repeatedly + +* merged Ronny's nose-compatibility hacks: now + nose-style setup_module() and setup() functions are + supported + +* introduce generalized py.test.mark function marking + +* reshuffle / refine command line grouping + +* deprecate parser.addgroup in favour of getgroup which creates option group + +* add --report command line option that allows to control showing of skipped/xfailed sections + +* generalized skipping: a new way to mark python functions with skipif or xfail + at function, class and modules level based on platform or sys-module attributes. + +* extend py.test.mark decorator to allow for positional args + +* introduce and test "py.cleanup -d" to remove empty directories + +* fix issue #59 - robustify unittest test collection + +* make bpython/help interaction work by adding an __all__ attribute + to ApiModule, cleanup initpkg + +* use MIT license for pylib, add some contributors + +* remove py.execnet code and substitute all usages with 'execnet' proper + +* fix issue50 - cached_setup now caches more to expectations + for test functions with multiple arguments. 
+ +* merge Jarko's fixes, issue #45 and #46 + +* add the ability to specify a path for py.lookup to search in + +* fix a funcarg cached_setup bug probably only occuring + in distributed testing and "module" scope with teardown. + +* many fixes and changes for making the code base python3 compatible, + many thanks to Benjamin Peterson for helping with this. + +* consolidate builtins implementation to be compatible with >=2.3, + add helpers to ease keeping 2 and 3k compatible code + +* deprecate py.compat.doctest|subprocess|textwrap|optparse + +* deprecate py.magic.autopath, remove py/magic directory + +* move pytest assertion handling to py/code and a pytest_assertion + plugin, add "--no-assert" option, deprecate py.magic namespaces + in favour of (less) py.code ones. + +* consolidate and cleanup py/code classes and files + +* cleanup py/misc, move tests to bin-for-dist + +* introduce delattr/delitem/delenv methods to py.test's monkeypatch funcarg + +* consolidate py.log implementation, remove old approach. + +* introduce py.io.TextIO and py.io.BytesIO for distinguishing between + text/unicode and byte-streams (uses underlying standard lib io.* + if available) + +* make py.unittest_convert helper script available which converts "unittest.py" + style files into the simpler assert/direct-test-classes py.test/nosetests + style. The script was written by Laura Creighton. + +* simplified internal localpath implementation --- a/testing/log/test_log.py +++ b/testing/log/test_log.py @@ -118,7 +118,7 @@ class TestLogConsumer: def test_log_file(self): customlog = tempdir.join('log.out') - py.log.setconsumer("default", open(str(customlog), 'w', buffering=0)) + py.log.setconsumer("default", open(str(customlog), 'w', buffering=1)) py.log.Producer("default")("hello world #1") assert customlog.readlines() == ['[default] hello world #1\n'] --- a/contrib/pytest_coverage/__init__.py +++ /dev/null @@ -1,329 +0,0 @@ -""" -Tested with coverage 2.85 and pygments 1.0 - -TODO: - + 'html-output/*,cover' should be deleted - + credits for coverage - + credits for pygments - + 'Install pygments' after ImportError is to less - + is the way of determining DIR_CSS_RESOURCE ok? 
- + write plugin test - + '.coverage' still exists in py.test execution dir -""" - -import os -import sys -import re -import shutil -from StringIO import StringIO - -import py - -try: - from pygments import highlight - from pygments.lexers import get_lexer_by_name - from pygments.formatters import HtmlFormatter -except ImportError: - print "Install pygments" # XXX - sys.exit(0) - - -DIR_CUR = str(py.path.local()) -REPORT_FILE = os.path.join(DIR_CUR, '.coverage') -DIR_ANNOTATE_OUTPUT = os.path.join(DIR_CUR, '.coverage_annotate') -COVERAGE_MODULES = set() -# coverage output parsing -REG_COVERAGE_SUMMARY = re.compile('([a-z_\.]+) +([0-9]+) +([0-9]+) +([0-9]+%)') -REG_COVERAGE_SUMMARY_TOTAL = re.compile('(TOTAL) +([0-9]+) +([0-9]+) +([0-9]+%)') -DEFAULT_COVERAGE_OUTPUT = '.coverage_annotation' -# HTML output specific -DIR_CSS_RESOURCE = os.path.dirname(__import__('pytest_coverage').__file__) -CSS_RESOURCE_FILES = ['header_bg.jpg', 'links.gif'] - -COVERAGE_TERM_HEADER = "\nCOVERAGE INFORMATION\n" \ - "====================\n" -HTML_INDEX_HEADER = ''' - - - py.test - Coverage Index - - - - - - Module Coverage - - - - Module - Statements - Executed - Coverage - - ''' -HTML_INDEX_FOOTER = ''' - - - ''' - - -class CoverageHtmlFormatter(HtmlFormatter): - """XXX: doc""" - - def __init__(self, *args, **kwargs): - HtmlFormatter.__init__(self,*args, **kwargs) - self.annotation_infos = kwargs.get('annotation_infos') - - def _highlight_lines(self, tokensource): - """ - XXX: doc - """ - - hls = self.hl_lines - self.annotation_infos = [None] + self.annotation_infos - hls = [l for l, i in enumerate(self.annotation_infos) if i] - for i, (t, value) in enumerate(tokensource): - if t != 1: - yield t, value - if i + 1 in hls: # i + 1 because Python indexes start at 0 - if self.annotation_infos[i+1] == "!": - yield 1, '%s' \ - % value - elif self.annotation_infos[i+1] == ">": - yield 1, '%s' \ - % value - else: - raise ValueError("HHAHA: %s" % self.annotation_infos[i+1]) - else: - yield 1, value - - -def _rename_annotation_files(module_list, dir_annotate_output): - for m in module_list: - mod_fpath = os.path.basename(m.__file__) - if mod_fpath.endswith('pyc'): - mod_fpath = mod_fpath[:-1] - old = os.path.join(dir_annotate_output, '%s,cover'% mod_fpath) - new = os.path.join(dir_annotate_output, '%s,cover'% m.__name__) - if os.path.isfile(old): - shutil.move(old, new) - yield new - -def _generate_module_coverage(mc_path, anotation_infos, src_lines): - #XXX: doc - - code = "".join(src_lines) - mc_path = "%s.html" % mc_path - lexer = get_lexer_by_name("python", stripall=True) - formatter = CoverageHtmlFormatter(linenos=True, noclasses=True, - hl_lines=[1], annotation_infos=anotation_infos) - result = highlight(code, lexer, formatter) - fp = open(mc_path, 'w') - fp.write(result) - fp.close() - -def _parse_modulecoverage(mc_fpath): - #XXX: doc - - fd = open(mc_fpath, 'r') - anotate_infos = [] - src_lines = [] - for line in fd.readlines(): - anotate_info = line[0:2].strip() - if not anotate_info: - anotate_info = None - src_line = line[2:] - anotate_infos.append(anotate_info) - src_lines.append(src_line) - return mc_fpath, anotate_infos, src_lines - -def _parse_coverage_summary(fd): - """Parses coverage summary output.""" - - if hasattr(fd, 'readlines'): - fd.seek(0) - for l in fd.readlines(): - m = REG_COVERAGE_SUMMARY.match(l) - if m: - # yield name, stmts, execs, cover - yield m.group(1), m.group(2), m.group(3), m.group(4) - else: - m = REG_COVERAGE_SUMMARY_TOTAL.match(l) - if m: - # yield name, stmts, execs, cover - 
yield m.group(1), m.group(2), m.group(3), m.group(4) - - -def _get_coverage_index(mod_name, stmts, execs, cover, annotation_dir): - """ - Generates the index page where are all modulare coverage reports are - linked. - """ - - if mod_name == 'TOTAL': - return '%s%s%s%s\n' % (mod_name, stmts, execs, cover) - covrep_fpath = os.path.join(annotation_dir, '%s,cover.html' % mod_name) - assert os.path.isfile(covrep_fpath) == True - fname = os.path.basename(covrep_fpath) - modlink = '%s' % (fname, mod_name) - return '%s%s%s%s\n' % (modlink, stmts, execs, cover) - - -class CoveragePlugin: - def pytest_addoption(self, parser): - group = parser.addgroup('coverage options') - group.addoption('-C', action='store_true', default=False, - dest = 'coverage', - help=('displays coverage information.')) - group.addoption('--coverage-html', action='store', default=False, - dest='coverage_annotation', - help='path to the coverage HTML output dir.') - group.addoption('--coverage-css-resourcesdir', action='store', - default=DIR_CSS_RESOURCE, - dest='coverage_css_ressourcedir', - help='path to dir with css-resources (%s) for ' - 'being copied to the HTML output dir.' % \ - ", ".join(CSS_RESOURCE_FILES)) - - def pytest_configure(self, config): - if config.getvalue('coverage'): - try: - import coverage - except ImportError: - raise config.Error("To run use the coverage option you have to install " \ - "Ned Batchelder's coverage: "\ - "http://nedbatchelder.com/code/modules/coverage.html") - self.coverage = coverage - self.summary = None - - def pytest_terminal_summary(self, terminalreporter): - if hasattr(self, 'coverage'): - self.coverage.stop() - module_list = [sys.modules[mod] for mod in COVERAGE_MODULES] - module_list.sort() - summary_fd = StringIO() - # get coverage reports by module list - self.coverage.report(module_list, file=summary_fd) - summary = COVERAGE_TERM_HEADER + summary_fd.getvalue() - terminalreporter._tw.write(summary) - - config = terminalreporter.config - dir_annotate_output = config.getvalue('coverage_annotation') - if dir_annotate_output: - if dir_annotate_output == "": - dir_annotate_output = DIR_ANNOTATE_OUTPUT - # create dir - if os.path.isdir(dir_annotate_output): - shutil.rmtree(dir_annotate_output) - os.mkdir(dir_annotate_output) - # generate annotation text files for later parsing - self.coverage.annotate(module_list, dir_annotate_output) - # generate the separate module coverage reports - for mc_fpath in _rename_annotation_files(module_list, \ - dir_annotate_output): - # mc_fpath, anotate_infos, src_lines from _parse_do - _generate_module_coverage(*_parse_modulecoverage(mc_fpath)) - # creating contents for the index pagee for coverage report - idxpage_html = StringIO() - idxpage_html.write(HTML_INDEX_HEADER) - total_sum = None - for args in _parse_coverage_summary(summary_fd): - # mod_name, stmts, execs, cover = args - idxpage_html.write(_get_coverage_index(*args, \ - **dict(annotation_dir=dir_annotate_output))) - idxpage_html.write(HTML_INDEX_FOOTER) - idx_fpath = os.path.join(dir_annotate_output, 'index.html') - idx_fd = open(idx_fpath, 'w') - idx_fd.write(idxpage_html.getvalue()) - idx_fd.close() - - dir_css_resource_dir = config.getvalue('coverage_css_ressourcedir') - if dir_annotate_output and dir_css_resource_dir != "": - if not os.path.isdir(dir_css_resource_dir): - raise config.Error("CSS resource dir not found: '%s'" % \ - dir_css_resource_dir) - for r in CSS_RESOURCE_FILES: - src = os.path.join(dir_css_resource_dir, r) - if os.path.isfile(src): - dest = 
os.path.join(dir_annotate_output, r) - shutil.copy(src, dest) - - def pytest_collectstart(self, collector): - if isinstance(collector, py.__.test.pycollect.Module): - COVERAGE_MODULES.update(getattr(collector.obj, - 'COVERAGE_MODULES', [])) - - def pytest_testrunstart(self): - print "self.coverage", self.coverage - if hasattr(self, 'coverage'): - print "START coverage" - self.coverage.erase() - self.coverage.start() - - --- a/py/plugin/pytest_terminal.py +++ b/py/plugin/pytest_terminal.py @@ -259,8 +259,8 @@ class TerminalReporter: verinfo = ".".join(map(str, sys.version_info[:3])) msg = "python: platform %s -- Python %s" % (sys.platform, verinfo) + msg += " -- pytest-%s" % (py.__version__) if self.config.option.verbose or self.config.option.debug or getattr(self.config.option, 'pastebin', None): - msg += " -- pytest-%s" % (py.__version__) msg += " -- " + str(sys.executable) self.write_line(msg) --- a/doc/test/plugin/links.txt +++ b/doc/test/plugin/links.txt @@ -1,38 +1,42 @@ .. _`helpconfig`: helpconfig.html .. _`terminal`: terminal.html -.. _`pytest_recwarn.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_recwarn.py +.. _`pytest_recwarn.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_recwarn.py .. _`unittest`: unittest.html -.. _`pytest_monkeypatch.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_monkeypatch.py +.. _`pytest_monkeypatch.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_monkeypatch.py .. _`pastebin`: pastebin.html .. _`skipping`: skipping.html .. _`plugins`: index.html -.. _`pytest_doctest.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_doctest.py +.. _`mark`: mark.html +.. _`tmpdir`: tmpdir.html +.. _`pytest_doctest.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_doctest.py .. _`capture`: capture.html -.. _`pytest_nose.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_nose.py -.. _`pytest_restdoc.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_restdoc.py +.. _`pytest_nose.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_nose.py +.. _`pytest_restdoc.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_restdoc.py .. _`restdoc`: restdoc.html -.. _`pytest_pastebin.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_pastebin.py -.. _`mark`: mark.html -.. _`pytest_figleaf.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_figleaf.py -.. _`pytest_hooklog.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_hooklog.py -.. _`pytest_skipping.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_skipping.py +.. _`pytest_pastebin.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_pastebin.py +.. _`pytest_tmpdir.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_tmpdir.py +.. _`pytest_figleaf.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_figleaf.py +.. _`pytest_hooklog.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_hooklog.py +.. _`pytest_skipping.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_skipping.py .. _`checkout the py.test development version`: ../../install.html#checkout -.. _`pytest_helpconfig.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_helpconfig.py +.. 
_`pytest_helpconfig.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_helpconfig.py .. _`oejskit`: oejskit.html .. _`doctest`: doctest.html -.. _`pytest_mark.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_mark.py +.. _`pytest_mark.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_mark.py .. _`get in contact`: ../../contact.html -.. _`pytest_capture.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_capture.py +.. _`pytest_capture.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_capture.py .. _`figleaf`: figleaf.html .. _`customize`: ../customize.html .. _`hooklog`: hooklog.html -.. _`pytest_terminal.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_terminal.py +.. _`pytest_terminal.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_terminal.py .. _`recwarn`: recwarn.html -.. _`pytest_pdb.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_pdb.py +.. _`pytest_pdb.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_pdb.py .. _`monkeypatch`: monkeypatch.html +.. _`coverage`: coverage.html .. _`resultlog`: resultlog.html .. _`django`: django.html -.. _`pytest_unittest.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_unittest.py +.. _`xmlresult`: xmlresult.html +.. _`pytest_unittest.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_unittest.py .. _`nose`: nose.html -.. _`pytest_resultlog.py`: http://bitbucket.org/hpk42/py-trunk/raw/trunk/_py/test/plugin/pytest_resultlog.py +.. _`pytest_resultlog.py`: http://bitbucket.org/hpk42/py-trunk/raw/1.1.0/py/plugin/pytest_resultlog.py .. _`pdb`: pdb.html --- a/setup.py +++ b/setup.py @@ -28,7 +28,7 @@ def main(): name='py', description='py.test and pylib: rapid testing and development utils.', long_description = long_description, - version= trunk or '1.1.0b1', + version= trunk or '1.1.0', url='http://pylib.org', license='MIT license', platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'], @@ -42,7 +42,7 @@ def main(): 'py.svnwcrevert = py.cmdline:pysvnwcrevert', 'py.test = py.cmdline:pytest', 'py.which = py.cmdline:pywhich']}, - classifiers=['Development Status :: 4 - Beta', + classifiers=['Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'License :: OSI Approved :: MIT License', 'Operating System :: POSIX', @@ -50,7 +50,6 @@ def main(): 'Operating System :: MacOS :: MacOS X', 'Topic :: Software Development :: Testing', 'Topic :: Software Development :: Libraries', - 'Topic :: System :: Distributed Computing', 'Topic :: Utilities', 'Programming Language :: Python'], packages=['py', --- a/doc/install.txt +++ b/doc/install.txt @@ -25,7 +25,7 @@ on Windows you might need to write down The py lib and its tools are expected to work well on Linux, Windows and OSX, Python versions 2.4, 2.5, 2.6 through to -the Python3 versions 3.0 and 3.1. Jython +the Python3 versions 3.0 and 3.1 and Jython .. _mercurial: http://mercurial.selenic.com/wiki/ .. _`Distribute`: @@ -43,15 +43,13 @@ and documentation source with mercurial_ hg clone https://bitbucket.org/hpk42/py-trunk/ -This currrently contains a 1.0.x branch and the -default 'trunk' branch where mainline development -takes place. +Development usually takes place on the 'trunk' branch. .. 
There also is a readonly subversion checkout available which contains the latest release:: svn co https://codespeak.net/svn/py/dist -You can go to the python package index and +You can also go to the python package index and download and unpack a TAR file:: http://pypi.python.org/pypi/py/ @@ -64,7 +62,7 @@ With a working `Distribute`_ or setuptoo python setup.py develop -in order to work with the tools and the lib of your checkout. +in order to work inline with the tools and the lib of your checkout. .. _`no-setuptools`: --- /dev/null +++ b/doc/test/plugin/tmpdir.txt @@ -0,0 +1,39 @@ + +pytest_tmpdir plugin +==================== + +provide temporary directories to test functions. + +.. contents:: + :local: + +usage example:: + + def test_plugin(tmpdir): + tmpdir.join("hello").write("hello") + +.. _`py.path.local`: ../../path.html + +.. _`tmpdir funcarg`: + + +the 'tmpdir' test function argument +----------------------------------- + +return a temporary directory path object +unique to each test function invocation, +created as a sub directory of the base temporary +directory. The returned object is a `py.path.local`_ +path object. + +Start improving this plugin in 30 seconds +========================================= + + +1. Download `pytest_tmpdir.py`_ plugin source code +2. put it somewhere as ``pytest_tmpdir.py`` into your import path +3. a subsequent ``py.test`` run will use your local version + +Checkout customize_, other plugins_ or `get in contact`_. + +.. include:: links.txt --- a/doc/faq.txt +++ b/doc/faq.txt @@ -13,75 +13,79 @@ On naming, nosetests, licensing and magi Why the ``py`` naming? what is it? ------------------------------------ -Because the name was kind of available and there was the +Because the name was available and there was the idea to have the package evolve into a "standard" library kind of thing that works cross-python versions and is not tied to a particular CPython revision or its release cycle. Clearly, this was ambitious and the naming has maybe haunted the project rather than helping it. -There may be a project name change and possibly a -split up into different projects sometime. Why the ``py.test`` naming? ------------------------------------ -the py lib contains other command line tools that -all share the ``py.`` prefix which makes it easy -to use TAB-completion on the shell. Another motivation -was to make it obvious where testing functionality -for the ``py.test`` command line tool is: in the -``py.test`` package name space. +because of TAB-completion under Bash/Shells. If you hit +``py.`` you'll get a list of available development +tools that all share the ``py.`` prefix. Another motivation +was to unify the package ("py.test") and tool filename. What's py.test's relation to ``nosetests``? --------------------------------------------- py.test and nose_ share basic philosophy when it comes to running Python tests. In fact, -with py.test-1.0.1 it is easy to run many test suites +with py.test-1.1.0 it is ever easier to run many test suites that currently work with ``nosetests``. nose_ was created -as a clone of ``py.test`` when it was in the ``0.8`` release +as a clone of ``py.test`` when py.test was in the ``0.8`` release cycle so some of the newer features_ introduced with py.test-1.0 -have no counterpart in nose_. +and py.test-1.1 have no counterpart in nose_. .. _nose: http://somethingaboutorange.com/mrl/projects/nose/0.11.1/ .. _features: test/features.html +.. _apipkg: http://pypi.python.org/pypi/apipkg -What's all this "magic" with py.test? 
+ +What's this "magic" with py.test? ---------------------------------------- -"All this magic" usually boils down to two issues: +issues where people have used the term "magic" in the past: -* There is a special tweak to importing: `py/__init__.py`_ contains - a dictionary which maps the importable ``py.*`` namespaces to - objects in files. When looking at the project source code - you see imports like ``from py.__.test.session import Session``. The - the double ``__`` underscore indicates the "normal" python - filesystem/namespace coupled import, i.e. it points to - ``py/test/session.py``'s ``Session`` object. However, - from the outside you use the "non-underscore" `py namespaces`_ - so this distinction usually only shows up if you hack - on internal code or see internal tracebacks. +* `py/__init__.py`_ uses the apipkg_ mechanism for lazy-importing + and full control on what API you get when importing "import py". -* when an ``assert`` fails, py.test re-interprets the expression - to show intermediate values. This allows to use the plain ``assert`` - statement instead of the many methods that you otherwise need - to mimick this behaviour. This means that in case of a failing - assert, your expressions gets evaluated *twice*. If your expression - has side effects the outcome may be different. If the test suddenly - passes you will get a detailed message. It is good practise, anyway, - to not have asserts with side effects. ``py.test --nomagic`` turns - off assert re-intepretation. +* when an ``assert`` statement fails, py.test re-interprets the expression + to show intermediate values if a test fails. If your expression + has side effects the intermediate values may not be the same, obfuscating + the initial error (this is also explained at the command line if it happens). + ``py.test --no-assert`` turns off assert re-intepretation. + Sidenote: it is good practise to avoid asserts with side effects. -Other than that, ``py.test`` has bugs or quirks like any other computer -software. In fact, it has a *strong* focus on running robustly and has -over a thousand automated tests for its own code base. .. _`py namespaces`: index.html -.. _`py/__init__.py`: http://bitbucket.org/hpk42/py-trunk/src/1.0.x/py/__init__.py +.. _`py/__init__.py`: http://bitbucket.org/hpk42/py-trunk/src/trunk/py/__init__.py -function arguments and parametrized tests -=============================================== +function arguments, parametrized tests and setup +==================================================== + +.. _funcargs: test/funcargs.html + +Is using funcarg- versus xUnit-based setup a style question? +--------------------------------------------------------------- + +It depends. For simple applications or for people experienced +with nose_ or unittest-style test setup using `xUnit style setup`_ +make some sense. For larger test suites, parametrized testing +or setup of complex test resources using funcargs_ is recommended. +Moreover, funcargs are ideal for writing advanced test support +code (like e.g. the monkeypatch_, the tmpdir_ or capture_ funcargs) +because the support code can register setup/teardown functions +in a managed class/module/function scope. + +.. _monkeypatch: test/plugin/monkeypatch.html +.. _tmpdir: test/plugin/tmpdir.html +.. _capture: test/plugin/capture.html +.. _`xUnit style setup`: test/xunit_setup.html +.. _`pytest_nose`: test/plugin/nose.html .. _`why pytest_pyfuncarg__ methods?`: @@ -94,7 +98,7 @@ flexibility we decided to go for `Conven allow to directly specify the factory. 
Besides removing the need for an indirection it allows to "grep" for ``pytest_funcarg__MYARG`` and will safely find all factory functions for the ``MYARG`` function -argument. It helps to alleviates the de-coupling of function +argument. It helps to alleviate the de-coupling of function argument usage and creation. .. _`Convention over Configuration`: http://en.wikipedia.org/wiki/Convention_over_Configuration --- a/doc/code.txt +++ b/doc/code.txt @@ -18,7 +18,7 @@ Contents of the library Every object in the ``py.code`` library wraps a code Python object related to code objects, source code, frames and tracebacks: the ``py.code.Code`` class wraps code objects, ``py.code.Source`` source snippets, -``py.code.Traceback` exception tracebacks, :api:`py.code.Frame`` frame +``py.code.Traceback` exception tracebacks, ``py.code.Frame`` frame objects (as found in e.g. tracebacks) and ``py.code.ExceptionInfo`` the tuple provided by sys.exc_info() (containing exception and traceback information when an exception occurs). Also in the library is a helper function Binary file contrib/pytest_coverage/links.gif has changed --- a/doc/test/plugin/index.txt +++ b/doc/test/plugin/index.txt @@ -1,63 +1,63 @@ -plugins for Python test functions -================================= +advanced python testing +======================= skipping_ advanced skipping for python test functions, classes or modules. +mark_ generic mechanism for marking python functions. + +pdb_ interactive debugging with the Python Debugger. + figleaf_ write and report coverage data with 'figleaf'. +coverage_ (3rd) for testing with Ned's coverage module + monkeypatch_ safely patch object attributes, dicts and environment variables. capture_ configurable per-test stdout/stderr capturing mechanisms. recwarn_ helpers for asserting deprecation and other warnings. +tmpdir_ provide temporary directories to test functions. -plugins for other testing styles and languages -============================================== -oejskit_ run javascript tests in real life browsers +testing domains +=============== + +oejskit_ (3rd) run javascript tests in real life browsers + +django_ (3rd) for testing django applications + + +reporting and failure logging +============================= + +pastebin_ submit failure or test session information to a pastebin service. + +xmlresult_ (3rd) for generating xml reports and CruiseControl integration + +resultlog_ resultlog plugin for machine-readable logging of test results. + +terminal_ Implements terminal reporting of the full testing process. + + +other testing conventions +========================= unittest_ automatically discover and run traditional "unittest.py" style tests. nose_ nose-compatibility plugin: allow to run nose test suites natively. -django_ support for testing django applications - doctest_ collect and execute doctests from modules and test files. restdoc_ perform ReST syntax, local and remote reference tests on .rst/.txt files. -plugins for generic reporting and failure logging -================================================= - -pastebin_ submit failure or test session information to a pastebin service. - -resultlog_ resultlog plugin for machine-readable logging of test results. - -terminal_ Implements terminal reporting of the full testing process. - - -plugins for generic reporting and failure logging -================================================= - -pastebin_ submit failure or test session information to a pastebin service. 
- -resultlog_ resultlog plugin for machine-readable logging of test results. - -terminal_ Implements terminal reporting of the full testing process. - - -misc plugins / core functionality -================================= +core debugging / help functionality +=================================== helpconfig_ provide version info, conftest/environment config names. -pdb_ interactive debugging with the Python Debugger. - -mark_ generic mechanism for marking python functions. - hooklog_ log invocations of extension hooks to a file. --- /dev/null +++ b/doc/test/plugin/coverage.txt @@ -0,0 +1,10 @@ +pytest_xmlresult plugin (EXTERNAL) +========================================== + +This plugin allows to write results in an XML format +compatible to CruiseControl_, see here for download: + + http://github.com/rozza/py.test-plugins + +.. _CruiseControl: http://cruisecontrol.sourceforge.net/ + --- a/py/plugin/pytest_tmpdir.py +++ b/py/plugin/pytest_tmpdir.py @@ -1,16 +1,21 @@ -""" - provide temporary directories to test functions and methods. +"""provide temporary directories to test functions. -example: - - pytest_plugins = "pytest_tmpdir" +usage example:: def test_plugin(tmpdir): tmpdir.join("hello").write("hello") +.. _`py.path.local`: ../../path.html + """ import py def pytest_funcarg__tmpdir(request): + """return a temporary directory path object + unique to each test function invocation, + created as a sub directory of the base temporary + directory. The returned object is a `py.path.local`_ + path object. + """ name = request.function.__name__ return request.config.mktemp(name, numbered=True) --- a/doc/announce/releases.txt +++ b/doc/announce/releases.txt @@ -5,10 +5,12 @@ Release notes Contents: .. toctree:: - :maxdepth: 1 + :maxdepth: 2 - announce/release-1.0.2 - announce/release-1.0.1 - announce/release-1.0.0 - announce/release-0.9.2 - announce/release-0.9.0 +.. include: release-1.1.0 +.. include: release-1.0.2 + + release-1.0.1 + release-1.0.0 + release-0.9.2 + release-0.9.0 --- a/doc/changelog.txt +++ b/doc/changelog.txt @@ -1,6 +1,8 @@ -Changes between 1.0.2 and '1.1.0b1' +Changes between 1.1.0 and 1.0.2 ===================================== +* adjust and improve docs + * remove py.rest tool and internal namespace - it was never really advertised and can still be used with the old release if needed. If there is interest @@ -49,6 +51,9 @@ Changes between 1.0.2 and '1.1.0b1' * fix a funcarg cached_setup bug probably only occuring in distributed testing and "module" scope with teardown. +* many fixes and changes for making the code base python3 compatible, + many thanks to Benjamin Peterson for helping with this. 
+ * consolidate builtins implementation to be compatible with >=2.3, add helpers to ease keeping 2 and 3k compatible code --- a/bin-for-dist/makepluginlist.py +++ b/bin-for-dist/makepluginlist.py @@ -3,28 +3,29 @@ import os, sys WIDTH = 75 plugins = [ - ('plugins for Python test functions', - 'skipping figleaf monkeypatch capture recwarn',), - ('plugins for other testing styles and languages', - 'oejskit unittest nose django doctest restdoc'), - ('plugins for generic reporting and failure logging', - 'pastebin resultlog terminal',), - ('plugins for generic reporting and failure logging', - 'pastebin resultlog terminal',), - ('misc plugins / core functionality', - 'helpconfig pdb mark hooklog') + ('advanced python testing', + 'skipping mark pdb figleaf coverage ' + 'monkeypatch capture recwarn tmpdir',), + ('testing domains', + 'oejskit django'), + ('reporting and failure logging', + 'pastebin xmlresult resultlog terminal',), + ('other testing conventions', + 'unittest nose doctest restdoc'), + ('core debugging / help functionality', + 'helpconfig hooklog') #('internal plugins / core functionality', - # #'pdb keyword hooklog runner execnetcleanup # pytester', - # 'pdb keyword hooklog runner execnetcleanup' # pytester', + # #'runner execnetcleanup # pytester', + # 'runner execnetcleanup' # pytester', #) ] externals = { - 'oejskit': "run javascript tests in real life browsers", - 'django': "support for testing django applications", -# 'coverage': "support for using Ned's coverage module", -# 'xmlresult': "support for generating xml reports " -# "and CruiseControl integration", + 'oejskit': "run javascript tests in real life browsers", + 'django': "for testing django applications", + 'coverage': "for testing with Ned's coverage module ", + 'xmlresult': "for generating xml reports " + "and CruiseControl integration", } def warn(*args): @@ -136,7 +137,7 @@ class PluginOverview(RestWriter): docpath = self.target.dirpath(name).new(ext=".txt") if oneliner is not None: htmlpath = docpath.new(ext='.html') - self.para("%s_ %s" %(name, oneliner)) + self.para("%s_ (3rd) %s" %(name, oneliner)) self.add_internal_link(name, htmlpath) else: doc = PluginDoc(docpath) @@ -212,7 +213,7 @@ class PluginDoc(RestWriter): # "py/test/plugin/%s" %(hg_changeset, basename))) self.links.append((basename, "http://bitbucket.org/hpk42/py-trunk/raw/%s/" - "_py/test/plugin/%s" %(pyversion, basename))) + "py/plugin/%s" %(pyversion, basename))) self.links.append(('customize', '../customize.html')) self.links.append(('plugins', 'index.html')) self.links.append(('get in contact', '../../contact.html')) --- a/doc/confrest.py +++ b/doc/confrest.py @@ -1,6 +1,6 @@ import py -from _py.test.plugin.pytest_restdoc import convert_rest_html, strip_html_header +from py.plugin.pytest_restdoc import convert_rest_html, strip_html_header html = py.xml.html @@ -57,23 +57,23 @@ pageTracker._trackPageview(); def fill_menubar(self): items = [ - self.a_docref("install", "install.html"), - self.a_docref("contact", "contact.html"), - self.a_docref("changelog", "changelog.html"), - self.a_docref("faq", "faq.html"), + self.a_docref("INSTALL", "install.html"), + self.a_docref("CONTACT", "contact.html"), + self.a_docref("CHANGELOG", "changelog.html"), + self.a_docref("FAQ", "faq.html"), html.div( html.h3("py.test:"), - self.a_docref("doc index", "test/index.html"), - self.a_docref("features", "test/features.html"), - self.a_docref("quickstart", "test/quickstart.html"), - self.a_docref("tutorials", "test/talks.html"), - self.a_docref("plugins", 
"test/plugin/index.html"), - self.a_docref("funcargs", "test/funcargs.html"), - self.a_docref("customize", "test/customize.html"), + self.a_docref("Index", "test/index.html"), + self.a_docref("Quickstart", "test/quickstart.html"), + self.a_docref("Features", "test/features.html"), + self.a_docref("Plugins", "test/plugin/index.html"), + self.a_docref("Funcargs", "test/funcargs.html"), + self.a_docref("Customize", "test/customize.html"), + self.a_docref("Tutorials", "test/talks.html"), ), html.div( html.h3("supporting APIs:"), - self.a_docref("pylib index", "index.html"), + self.a_docref("Index", "index.html"), self.a_docref("py.path", "path.html"), self.a_docref("py.code", "code.html"), ) @@ -85,9 +85,10 @@ pageTracker._trackPageview(); self.menubar = html.div(id=css.menubar, *[ html.div(item) for item in items]) version = py.version + announcelink = self.a_docref("%s ANN" % version, + "announce/release-%s.html" %(version,)) self.menubar.insert(0, - html.div("%s" % (py.version), style="font-style: italic;") - ) + html.div(announcelink)) #self.a_href("%s-%s" % (self.title, py.version), # "http://pypi.python.org/pypi/py/%s" % version, #id="versioninfo", --- a/doc/test/quickstart.txt +++ b/doc/test/quickstart.txt @@ -7,7 +7,7 @@ Quickstart .. _here: ../install.html -If you have a version of ``easy_install`` (otherwise see here_) just type:: +If you have any ``easy_install`` (otherwise see here_) just type:: easy_install -U py --- a/py/__init__.py +++ b/py/__init__.py @@ -2,20 +2,14 @@ """ py.test and pylib: rapid testing and development utils -- `py.test`_: cross-project testing tool with many advanced features -- `py.path`_: path abstractions over local and subversion files -- `py.code`_: dynamic code compile and traceback printing support - -Compatibility: Linux, Win32, OSX, Python versions 2.4 through to 3.1. -For questions please check out http://pylib.org/contact.html - -.. _`py.test`: http://pylib.org/test.html -.. _`py.path`: http://pylib.org/path.html -.. _`py.code`: http://pylib.org/html +this module uses apipkg.py for lazy-loading sub modules +and classes. The initpkg-dictionary below specifies +name->value mappings where value can be another namespace +dictionary or an import path. (c) Holger Krekel and others, 2009 """ -version = "trunk" +version = "1.1.0" __version__ = version = version or "1.1.x" import py.apipkg --- a/py/plugin/pytest_skipping.py +++ b/py/plugin/pytest_skipping.py @@ -83,7 +83,7 @@ skipping on a missing import dependency -------------------------------------------------- You can use the following import helper at module level -or within a test or setup function. +or within a test or test setup function:: docutils = py.test.importorskip("docutils") --- a/doc/path.txt +++ b/doc/path.txt @@ -40,7 +40,7 @@ a ``py.path.local`` object for us (which >>> foofile.read(1) 'b' -``py.path.svnurl` and :api:`py.path.svnwc`` +``py.path.svnurl` and ``py.path.svnwc`` ---------------------------------------------- Two other ``py.path`` implementations that the py lib provides wrap the --- a/doc/test/dist.txt +++ b/doc/test/dist.txt @@ -14,6 +14,8 @@ specify different Python versions and in **Requirements**: you need to install the `execnet`_ package to perform distributed test runs. +**NOTE**: Version 1.1.0 is not able to distribute tests across Python3/Python2 barriers. 
+ Speed up test runs by sending tests to multiple CPUs ---------------------------------------------------------- --- /dev/null +++ b/doc/test/plugin/xmlresult.txt @@ -0,0 +1,6 @@ +pytest_coverage plugin (EXTERNAL) +========================================== + +This plugin allows to use Ned's coverage package, see + + http://github.com/rozza/py.test-plugins --- a/doc/test/xunit_setup.txt +++ b/doc/test/xunit_setup.txt @@ -1,14 +1,20 @@ ==================================== -xUnit style setup +extended xUnit style setup ==================================== .. _`funcargs`: funcargs.html +.. _`test parametrization`: funcargs.html#parametrizing-tests +.. _`unittest plugin`: plugin/unittest.html .. _`xUnit`: http://en.wikipedia.org/wiki/XUnit Note: - Since version 1.0 funcargs_ present the recommended way - to manage flexible and scalable test setups. + Since version 1.0 funcargs_ present the new and + more powerful way to manage test setups with larger + test suites. *funcargs* also provide flexible + `test parametrization`_ which goes way beyond + what you can do with the xUnit setup/teardown-method + pattern. Python, Java and many other languages have a tradition of using xUnit_ style testing. This typically @@ -19,6 +25,10 @@ scopes for which you can provide setup/t hooks to provide test fixtures: per-module, per-class and per-method/function. ``py.test`` will discover and call according methods automatically. + +The `unittest plugin`_ will also integrate ``unittest.TestCase`` +instances into a test run and call respective setup/teardown methods. + All setup/teardown methods are optional. The following methods are called at module level if they exist: --- a/py/impl/test/parseopt.py +++ b/py/impl/test/parseopt.py @@ -46,6 +46,7 @@ class Parser: self._groups.insert(i+1, group) return group + addgroup = getgroup def addgroup(self, name, description=""): py.log._apiwarn("1.1", "use getgroup() which gets-or-creates") return self.getgroup(name, description) From commits-noreply at bitbucket.org Thu Nov 5 20:11:48 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 5 Nov 2009 19:11:48 +0000 (UTC) Subject: [py-svn] py-trunk commit 686d625c298c: Added tag 1.1.0 for changeset 60c44bdbf093 Message-ID: <20091105191148.D0C577EEE5@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1257439895 -3600 # Node ID 686d625c298ceb791c05fa4b7811a00ab3b9f88e # Parent 60c44bdbf093285dc69d5462d4dbb4acad325ca6 Added tag 1.1.0 for changeset 60c44bdbf093 --- a/.hgtags +++ b/.hgtags @@ -18,3 +18,4 @@ 5ea0cdf7854c3d4278d36eda94a2b68483a0e211 7acde360d94b6a2690ce3d03ff39301da84c0a2b 1.0.0 6bd221981ac99103002c1cb94fede400d23a96a1 1.0.1 4816e8b80602a3fd3a0a120333ad85fbe7d8bab4 1.0.2 +60c44bdbf093285dc69d5462d4dbb4acad325ca6 1.1.0 From commits-noreply at bitbucket.org Thu Nov 5 20:11:48 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 5 Nov 2009 19:11:48 +0000 (UTC) Subject: [py-svn] py-trunk commit 60c44bdbf093: fix up install docs and plugin docs for the final release Message-ID: <20091105191148.C49117EEE4@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1257439574 -3600 # Node ID 60c44bdbf093285dc69d5462d4dbb4acad325ca6 # Parent ae71a1d63d0d79f61a5f047d4c38c0f457ba038a fix up install docs and plugin docs for the final release have CHANGELOG be a 
file containing links instead of a symlink because it causes issues with pip-install on some systems. --- a/doc/install.txt +++ b/doc/install.txt @@ -3,29 +3,77 @@ Downloading ============== - .. _`PyPI project page`: http://pypi.python.org/pypi/py/ +.. _`PyPI project page`: http://pypi.python.org/pypi/py/ - Latest Release, see `PyPI project page`_ -using easy_install (via Distribute or setuptools) +py.test/pylib compat/install info in a nutshell =================================================== -It is recommended to use `Distribute for installation`_ as a drop-in -replacement for setuptools_. While setuptools should work well on -Python2 versions, `Distribute`_ allows to install py.test on Python3 -and it avoids issue on Windows. With either packaging system -you can type:: +PyPI Pyckage name: "**py**", see `PyPI project page`_ for latest version + +Installers: easy_install_ and pip_, setuptools_ or Distribute_ + +Pythons: 2.4, 2.5, 2.6, 3.0, 3.1, Jython-2.5.1, PyPy-1.1 + +Operating systems: Linux, Windows and OSX + probably many others + + +Best practice: install tool and dependencies virtually +=========================================================== + +It is recommended to work with virtual environments +(e.g. virtualenv_ or buildout_ based) and use easy_install_ +(or pip_) for installing py.test/pylib and any dependencies +you need to run your tests. Local virtual Python environments +(as opposed to system-wide "global" environments) make for a more +reproducible and reliable test environment. + +Note: as of November 2009 pytest/pylib 1.1 RPMs and DEB packages +are not available. If you want to easy_install the newest py.test +and pylib do everyone a favour and uninstall older versions +from the global system e.g. like this on Ubuntu:: + + sudo apt-get remove --purge python-codespeak-lib + +.. _`virtualenv`: http://pypi.python.org/pypi/virtualenv +.. _`buildout`: http://www.buildout.org/ +.. _pip: http://pypi.python.org/pypi/pip +.. _`easy_install`: + +using easy_install (from setuptools or Distribute) +=================================================== + +Both `Distribute`_ and setuptools_ provide the ``easy_install`` +installation tool. While setuptools should work ok with +Python2 interpreters, `Distribute`_ also works with Python3 +and it avoids some issues on Windows. In both cases you +can open a command line window and then type:: easy_install -U py -to get the latest release of the py lib and py.test. The ``-U`` switch +to install the latest release of the py lib and py.test. The ``-U`` switch will trigger an upgrade if you already have an older version installed. -On Linux systems you may need to execute the command as superuser and -on Windows you might need to write down the full path to ``easy_install``. -The py lib and its tools are expected to work well on Linux, -Windows and OSX, Python versions 2.4, 2.5, 2.6 through to -the Python3 versions 3.0 and 3.1 and Jython +If you now type:: + + py.test --version + +you should see the version number and the import location of the tool. +Maybe you want to head on with the `quickstart`_ now? + +.. _quickstart: test/quickstart.html + +Troubleshooting +======================== + +**On Linux**: If ``easy_install`` fails because it needs to run +as the superuser, you are trying to install things globally +and need to put ``sudo`` in front of the command. + +**On Windows**: If "easy_install" or "py.test" are not found +please see here: `How do i run a Python program under Windows?`_ + +.. 
_`How do i run a Python program under Windows?`: http://www.python.org/doc/faq/windows/#how-do-i-run-a-python-program-under-windows .. _mercurial: http://mercurial.selenic.com/wiki/ .. _`Distribute`: @@ -37,9 +85,8 @@ the Python3 versions 3.0 and 3.1 and Jyt Working from version control or a tarball ================================================= -To follow development or help with fixing things -for the next release, checkout the complete code -and documentation source with mercurial_:: +To follow development or start experiments, checkout the +complete code and documentation source with mercurial_:: hg clone https://bitbucket.org/hpk42/py-trunk/ @@ -68,14 +115,14 @@ in order to work inline with the tools a .. _`directly use a checkout`: -directly use a checkout or tarball +directly use a checkout or tarball ------------------------------------------------------------- -Once you got yourself a checkout_ or tarball_ you only need to -set ``PYTHONPATH`` and ``PATH`` environment variables. -It is usually a good idea to add the parent directory of the ``py`` package -directory to your ``PYTHONPATH`` and ``py/bin`` or ``py\bin\win32`` to your -system wide ``PATH`` settings. There are helper scripts that set ``PYTHONPATH`` and ``PATH`` on your system: +Once you got yourself a checkout_ or tarball_ it is usually a good +idea to add the parent directory of the ``py`` package directory +to your ``PYTHONPATH`` and ``py/bin`` or ``py\bin\win32`` to your +system wide ``PATH`` settings. There are helper scripts that +set ``PYTHONPATH`` and ``PATH`` on your system: on windows execute:: @@ -90,6 +137,9 @@ on linux/OSX add this to your shell init both of which which will get you good settings for ``PYTHONPATH`` and ``PATH``. +If you install ``py.test`` this way you can easily +``hg pull && hg up`` your checkout to follow the +development tree. note: scripts look for "nearby" py-lib ----------------------------------------------------- @@ -104,22 +154,15 @@ for "nearby" py libs, so if you have a l py/ issuing ``py.test subpkg1`` will use the py lib -from that projects root directory. +from that projects root directory. Giving the +state of Python packaging there can be confusion +in which case issuing:: + + py.test --version + +tells you both version and import location of the tool. .. _`command line scripts`: bin.html - -Debian and RPM packages -=================================== - -As of August 2009 pytest/pylib 1.0 RPMs and Debian packages -are not available. You will only find 0.9 versions - -on Debian systems look for ``python-codespeak-lib`` -and Dwayne Bailey has put together a Fedora `RPM`_. - -If you can help with providing/upgrading distribution -packages please use of the contact_ channels in case -of questions or need for changes. - .. _contact: contact.html .. _`RPM`: http://translate.sourceforge.net/releases/testing/fedora/pylib-0.9.2-1.fc9.noarch.rpm --- a/py/plugin/pytest_mark.py +++ b/py/plugin/pytest_mark.py @@ -65,6 +65,15 @@ The order in which marker functions are Later called markers may overwrite previous key-value settings. Positional arguments are all appended to the same 'args' list of the Marker object. + +Using "-k MARKNAME" to select tests +---------------------------------------------------- + +You can use the ``-k`` command line option to select +tests:: + + py.test -k webtest # will only run tests marked as webtest + """ import py --- a/doc/test/quickstart.txt +++ b/doc/test/quickstart.txt @@ -7,7 +7,7 @@ Quickstart .. 
_here: ../install.html -If you have any ``easy_install`` (otherwise see here_) just type:: +If you have the ``easy_install`` tool (otherwise see here_) just type:: easy_install -U py @@ -30,7 +30,7 @@ and will see output like this: .. sourcecode:: python =========================== test session starts ============================ - python: platform linux2 -- Python 2.6.2 + python: platform linux2 -- Python 2.6.2 -- pytest-1.1.0 test object 1: test_sample.py test_sample.py F @@ -51,15 +51,18 @@ a progress report and important details **Where to go from here** -`tutorials`_: a collection of starting points with code examples - `features`_: overview and description of test features -`contact`_: many ways for feedback and questions +`plugins`_: a list of available plugins which each contain usage examples + +`tutorials`_: some blog entries and starting points with code examples + +`contact`_: if you want to feedback or have problems .. _`contact`: ../contact.html .. _`automatically collected`: features.html#autocollect .. _install: ../install.html +.. _plugins: plugin/index.html .. _features: features.html .. _tutorials: talks.html --- a/doc/test/plugin/skipping.txt +++ b/doc/test/plugin/skipping.txt @@ -9,7 +9,7 @@ advanced skipping for python test functi With this plugin you can mark test functions for conditional skipping or as "xfail", expected-to-fail. Skipping a test will avoid running it -while xfail-marked tests will run and result in an inverted outcome: +at all while xfail-marked tests will run and result in an inverted outcome: a pass becomes a failure and a fail becomes a semi-passing one. The need for skipping a test is usually connected to a condition. @@ -22,29 +22,37 @@ at the end of a test run. .. _skipif: -mark a test function to be skipped +Skipping a single function ------------------------------------------- -Here is an example for skipping a test function when -running on Python3:: +Here is an example for marking a test function to be skipped +when run on a Python3 interpreter:: @py.test.mark.skipif("sys.version_info >= (3,0)") def test_function(): ... - During test function setup the skipif condition is evaluated by calling ``eval(expr, namespace)``. The namespace -contains the ``sys`` and ``os`` modules as well as the -test ``config`` object. The latter allows you to skip based +contains the ``sys`` and ``os`` modules and the test +``config`` object. The latter allows you to skip based on a test configuration value e.g. like this:: @py.test.mark.skipif("not config.getvalue('db')") def test_function(...): ... +Create a shortcut for your conditional skip decorator +at module level like this:: -mark many test functions at once + win32only = py.test.mark.skipif("sys.platform != 'win32'") + + @win32only + def test_function(): + ... + + +skip groups of test functions -------------------------------------- As with all metadata function marking you can do it at @@ -58,11 +66,12 @@ for skipping all methods of a test class # will not be setup or run under 'win32' platform # +The ``pytestmark`` decorator will be applied to each test function. .. _`whole class- or module level`: mark.html#scoped-marking -mark a test function as expected to fail +mark a test function as **expected to fail** ------------------------------------------------------- You can use the ``xfail`` marker to indicate that you @@ -79,7 +88,7 @@ when it fails. 
Instead terminal reportin Same as with skipif_ you can also selectively expect a failure depending on platform:: - @py.test.mark.xfail(if"sys.version_info >= (3,0)") + @py.test.mark.xfail("sys.version_info >= (3,0)") def test_function(): ... @@ -89,7 +98,7 @@ skipping on a missing import dependency -------------------------------------------------- You can use the following import helper at module level -or within a test or setup function. +or within a test or test setup function:: docutils = py.test.importorskip("docutils") --- a/py/plugin/pytest_skipping.py +++ b/py/plugin/pytest_skipping.py @@ -3,7 +3,7 @@ advanced skipping for python test functi With this plugin you can mark test functions for conditional skipping or as "xfail", expected-to-fail. Skipping a test will avoid running it -while xfail-marked tests will run and result in an inverted outcome: +at all while xfail-marked tests will run and result in an inverted outcome: a pass becomes a failure and a fail becomes a semi-passing one. The need for skipping a test is usually connected to a condition. @@ -16,29 +16,37 @@ at the end of a test run. .. _skipif: -mark a test function to be skipped +Skipping a single function ------------------------------------------- -Here is an example for skipping a test function when -running on Python3:: +Here is an example for marking a test function to be skipped +when run on a Python3 interpreter:: @py.test.mark.skipif("sys.version_info >= (3,0)") def test_function(): ... - During test function setup the skipif condition is evaluated by calling ``eval(expr, namespace)``. The namespace -contains the ``sys`` and ``os`` modules as well as the -test ``config`` object. The latter allows you to skip based +contains the ``sys`` and ``os`` modules and the test +``config`` object. The latter allows you to skip based on a test configuration value e.g. like this:: @py.test.mark.skipif("not config.getvalue('db')") def test_function(...): ... +Create a shortcut for your conditional skip decorator +at module level like this:: -mark many test functions at once + win32only = py.test.mark.skipif("sys.platform != 'win32'") + + @win32only + def test_function(): + ... + + +skip groups of test functions -------------------------------------- As with all metadata function marking you can do it at @@ -52,11 +60,12 @@ for skipping all methods of a test class # will not be setup or run under 'win32' platform # +The ``pytestmark`` decorator will be applied to each test function. .. _`whole class- or module level`: mark.html#scoped-marking -mark a test function as expected to fail +mark a test function as **expected to fail** ------------------------------------------------------- You can use the ``xfail`` marker to indicate that you @@ -73,7 +82,7 @@ when it fails. Instead terminal reportin Same as with skipif_ you can also selectively expect a failure depending on platform:: - @py.test.mark.xfail(if"sys.version_info >= (3,0)") + @py.test.mark.xfail("sys.version_info >= (3,0)") def test_function(): ... --- a/doc/test/plugin/mark.txt +++ b/doc/test/plugin/mark.txt @@ -70,7 +70,15 @@ The order in which marker functions are Later called markers may overwrite previous key-value settings. Positional arguments are all appended to the same 'args' list -of the Marker object. +of the Marker object. 
+ +Using "-k MARKNAME" to select tests +---------------------------------------------------- + +You can use the ``-k`` command line option to select +tests:: + + py.test -k webtest # will only run tests marked as webtest Start improving this plugin in 30 seconds ========================================= --- a/CHANGELOG +++ b/CHANGELOG @@ -1,1 +1,7 @@ -doc/changelog.txt + +see doc/announce/release-1.1.0.txt for a summary +of the last minor release + +and + +see doc/changelog.txt for details --- a/doc/test/plugin/xmlresult.txt +++ b/doc/test/plugin/xmlresult.txt @@ -1,6 +1,9 @@ -pytest_coverage plugin (EXTERNAL) +pytest_xmlresult plugin (EXTERNAL) ========================================== -This plugin allows to use Ned's coverage package, see +This plugin allows to write results in an XML format +compatible to CruiseControl_, see here for download: http://github.com/rozza/py.test-plugins + +.. _CruiseControl: http://cruisecontrol.sourceforge.net/ --- a/doc/test/plugin/coverage.txt +++ b/doc/test/plugin/coverage.txt @@ -1,10 +1,9 @@ -pytest_xmlresult plugin (EXTERNAL) +pytest_coverage plugin (EXTERNAL) ========================================== -This plugin allows to write results in an XML format -compatible to CruiseControl_, see here for download: +This plugin allows to use Ned's coverage_ package, see http://github.com/rozza/py.test-plugins -.. _CruiseControl: http://cruisecontrol.sourceforge.net/ +.. _coverage: http://pypi.python.org/pypi/coverage --- a/doc/announce/release-1.1.0.txt +++ b/doc/announce/release-1.1.0.txt @@ -3,30 +3,30 @@ py.test/pylib 1.1.0: Python3, Jython, ad Features: -* compatible to Python3 (single py2/py3 source), works with Distribute +* compatible to Python3 (single py2/py3 source), `easy to install`_ * generalized marking_: mark tests one a whole-class or whole-module basis * conditional skipping_: skip/xfail based on platform/dependencies Fixes: * code reduction and "de-magification" (e.g. 23 KLoc -> 11 KLOC) -* distribute testing requires the now separately released 'execnet' package +* distribute testing requires the now separately released execnet_ package * funcarg-setup/caching, "same-name" test modules now cause an exlicit error -* de-cluttered reporting, --report option for skipped/xfail details +* de-cluttered reporting options, --report for skipped/xfail details Compatibilities 1.1.0 should allow running test code that already worked well with 1.0.2 plus some more due to improved unittest/nose compatibility. -More information: - - http://pytest.org +More information: http://pytest.org thanks and have fun, holger (http://twitter.com/hpk42) +.. _execnet: http://codespeak.net/execnet +.. _`easy to install`: ../install.html .. _marking: ../test/plugin/mark.html .. _skipping: ../test/plugin/skipping.html From commits-noreply at bitbucket.org Thu Nov 12 13:11:24 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 12 Nov 2009 12:11:24 +0000 (UTC) Subject: [py-svn] py-trunk commit 6f58c972cbcb: fix a bug with svnwc.listdir() not accepting a checker(versioned=...) Message-ID: <20091112121124.AC1657EF19@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258027767 -3600 # Node ID 6f58c972cbcb6be7427471fa596dd5a17e09e328 # Parent 686d625c298ceb791c05fa4b7811a00ab3b9f88e fix a bug with svnwc.listdir() not accepting a checker(versioned=...) 
--- a/py/impl/path/svnwc.py +++ b/py/impl/path/svnwc.py @@ -808,9 +808,11 @@ recursively. """ def notsvn(path): return path.basename != '.svn' - paths = [self.__class__(p, auth=self.auth) - for p in self.localpath.listdir() - if notsvn(p) and (not fil or fil(p))] + paths = [] + for localpath in self.localpath.listdir(notsvn): + p = self.__class__(localpath, auth=self.auth) + if notsvn(p) and (not fil or fil(p)): + paths.append(p) self._sortlist(paths, sort) return paths --- a/testing/path/test_svnwc.py +++ b/testing/path/test_svnwc.py @@ -276,6 +276,13 @@ class TestWCSvnCommandPath(CommonSvnTest finally: notexisting.remove() + def test_listdir_versioned(self, path1): + assert path1.check(versioned=1) + p = path1.localpath.ensure("not_a_versioned_file") + l = [x.localpath + for x in path1.listdir(lambda x: x.check(versioned=True))] + assert p not in l + def test_nonversioned_remove(self, path1): assert path1.check(versioned=1) somefile = path1.join('nonversioned/somefile') From commits-noreply at bitbucket.org Thu Nov 12 13:11:26 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 12 Nov 2009 12:11:26 +0000 (UTC) Subject: [py-svn] py-trunk commit a8ff9595a5b9: improve deprecation, start changelog Message-ID: <20091112121126.3EA0B7EF1E@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258027830 -3600 # Node ID a8ff9595a5b97b6015e283af4bf69cca38f956e0 # Parent 6f58c972cbcb6be7427471fa596dd5a17e09e328 improve deprecation, start changelog --- a/py/impl/compat/dep_subprocess.py +++ b/py/impl/compat/dep_subprocess.py @@ -1,4 +1,5 @@ import py -py.log._apiwarn("1.1", "py.compat.subprocess deprecated, use standard library version.", stacklevel="initpkg") +py.log._apiwarn("1.1", "py.compat.subprocess deprecated, use standard library version.", +stacklevel="apipkg") subprocess = py.std.subprocess --- a/testing/log/test_warning.py +++ b/testing/log/test_warning.py @@ -28,23 +28,30 @@ def test_stacklevel(): assert warning.find(":%s" % lno) != -1 def test_stacklevel_initpkg_with_resolve(testdir): - mod = testdir.makepyfile(initpkg=""" + testdir.makepyfile(modabc=""" import py + def f(): + py.log._apiwarn("x", "some", stacklevel="apipkg123") + """) + testdir.makepyfile(apipkg123=""" def __getattr__(): - f() - def f(): - py.log._apiwarn("x", "some", stacklevel="initpkg") - """).pyimport() + import modabc + modabc.f() + """) + p = testdir.makepyfile(""" + import apipkg123 + apipkg123.__getattr__() + """) capture = py.io.StdCapture() - mod.__getattr__() + p.pyimport() out, err = capture.reset() - lno = py.code.getrawcode(test_stacklevel_initpkg_with_resolve).co_firstlineno + 9 warning = str(err) - assert warning.find(":%s" % lno) != -1 + loc = 'test_stacklevel_initpkg_with_resolve.py:2' + assert warning.find(loc) != -1 def test_stacklevel_initpkg_no_resolve(): def f(): - py.log._apiwarn("x", "some", stacklevel="initpkg") + py.log._apiwarn("x", "some", stacklevel="apipkg") capture = py.io.StdCapture() f() out, err = capture.reset() --- a/testing/pytest/dist/acceptance_test.py +++ b/testing/pytest/dist/acceptance_test.py @@ -147,6 +147,6 @@ class TestDistribution: args += ["--tx", "popen//python=%s" % interpreters[0]] args += ["--tx", "popen//python=%s" % interpreters[1]] result = testdir.runpytest(*args) - result.stdout.fnmatch_lines(["2...4"]) - result.stdout.fnmatch_lines(["2...5"]) - + s = result.stdout.str() + assert "2.4" in s + assert "2.5" in s --- 
a/py/__init__.py +++ b/py/__init__.py @@ -9,7 +9,7 @@ dictionary or an import path. (c) Holger Krekel and others, 2009 """ -version = "1.1.0" +version = "1.1.1" __version__ = version = version or "1.1.x" import py.apipkg --- a/py/impl/compat/dep_optparse.py +++ b/py/impl/compat/dep_optparse.py @@ -1,4 +1,4 @@ import py -py.log._apiwarn("1.1", "py.compat.optparse deprecated, use standard library version.", stacklevel="initpkg") +py.log._apiwarn("1.1", "py.compat.optparse deprecated, use standard library version.", stacklevel="apipkg") optparse = py.std.optparse --- a/py/impl/compat/dep_doctest.py +++ b/py/impl/compat/dep_doctest.py @@ -1,4 +1,5 @@ import py -py.log._apiwarn("1.1", "py.compat.doctest deprecated, use standard library version.", stacklevel="initpkg") +py.log._apiwarn("1.1", "py.compat.doctest deprecated, use standard library version.", +stacklevel="apipkg") doctest = py.std.doctest --- a/setup.py +++ b/setup.py @@ -28,7 +28,7 @@ def main(): name='py', description='py.test and pylib: rapid testing and development utils.', long_description = long_description, - version= trunk or '1.1.0', + version= trunk or '1.1.1', url='http://pylib.org', license='MIT license', platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'], --- a/testing/test_compat_deprecation.py +++ b/testing/test_compat_deprecation.py @@ -7,9 +7,10 @@ def test_functional_deprecation(testdir) check(recwarn, name) def check(recwarn, name): x = getattr(py.compat, name) - recwarn.pop(DeprecationWarning) + warn = recwarn.pop(DeprecationWarning) recwarn.clear() assert x == getattr(py.std, name) + assert warn.filename.find("test_functional_deprecation.py") != -1 """) result = testdir.runpytest() assert result.ret == 0 --- a/doc/changelog.txt +++ b/doc/changelog.txt @@ -1,3 +1,12 @@ +Changes between 1.1.1 and 1.1.0 +===================================== + +- fix a bug with path.check(versioned=True) for svn paths + +- try harder to have deprecation warnings for py.compat.* accesses + report a correct location + + Changes between 1.1.0 and 1.0.2 ===================================== --- a/py/impl/log/warning.py +++ b/py/impl/log/warning.py @@ -13,14 +13,18 @@ class Warning(DeprecationWarning): def _apiwarn(startversion, msg, stacklevel=2, function=None): # below is mostly COPIED from python2.4/warnings.py's def warn() # Get context information - if stacklevel == "initpkg": - frame = sys._getframe(stacklevel == "initpkg" and 1 or stacklevel) - level = 2 + if isinstance(stacklevel, str): + frame = sys._getframe(1) + level = 1 + found = frame.f_code.co_filename.find(stacklevel) != -1 while frame: co = frame.f_code - if co.co_name == "__getattr__" and co.co_filename.find("initpkg") !=-1: - stacklevel = level - break + if co.co_filename.find(stacklevel) == -1: + if found: + stacklevel = level + break + else: + found = True level += 1 frame = frame.f_back else: --- a/py/impl/compat/dep_textwrap.py +++ b/py/impl/compat/dep_textwrap.py @@ -1,4 +1,5 @@ import py -py.log._apiwarn("1.1", "py.compat.textwrap deprecated, use standard library version.", stacklevel="initpkg") +py.log._apiwarn("1.1", "py.compat.textwrap deprecated, use standard library version.", + stacklevel="apipkg") textwrap = py.std.textwrap From commits-noreply at bitbucket.org Fri Nov 20 00:20:07 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 19 Nov 2009 23:20:07 +0000 (UTC) Subject: [py-svn] py-trunk commit 8862254d6357: a few internal test related fixes as to run on a osx/no-execnet situation Message-ID: 
<20091119232007.60EDF7EEEF@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258056959 -3600 # Node ID 8862254d63573ac014dd38f13a746da0e15deca9 # Parent a8ff9595a5b97b6015e283af4bf69cca38f956e0 a few internal test related fixes as to run on a osx/no-execnet situation --- a/testing/plugin/test_pytest_capture.py +++ b/testing/plugin/test_pytest_capture.py @@ -133,8 +133,8 @@ class TestPerTestCapturing: "setup test_func1*", "in func1*", "setup test_func2*", - "in func2*", - ]) + "in func2*", + ]) @py.test.mark.xfail def test_capture_scope_cache(self, testdir): --- a/doc/install.txt +++ b/doc/install.txt @@ -90,11 +90,7 @@ complete code and documentation source w hg clone https://bitbucket.org/hpk42/py-trunk/ -Development usually takes place on the 'trunk' branch. - -.. There also is a readonly subversion - checkout available which contains the latest release:: - svn co https://codespeak.net/svn/py/dist +Development takes place on the 'trunk' branch. You can also go to the python package index and download and unpack a TAR file:: --- a/testing/path/test_local.py +++ b/testing/path/test_local.py @@ -15,23 +15,6 @@ def pytest_funcarg__path1(request): assert path1.join("samplefile").check() return request.cached_setup(setup, teardown, scope="session") -def pytest_funcarg__tmpdir(request): - basedir = request.config.getbasetemp() - if request.cls: - try: - basedir = basedir.mkdir(request.cls.__name__) - except py.error.EEXIST: - pass - for i in range(1000): - name = request.function.__name__ - if i > 0: - name += str(i) - try: - return basedir.mkdir(name) - except py.error.EEXIST: - continue - raise ValueError("could not create tempdir") - class TestLocalPath(common.CommonFSTests): def test_join_normpath(self, tmpdir): assert tmpdir.join(".") == tmpdir --- a/testing/test_py_imports.py +++ b/testing/test_py_imports.py @@ -28,12 +28,7 @@ def test_importall(): base = py._impldir nodirs = [ base.join('test', 'testing', 'data'), - base.join('test', 'web'), base.join('path', 'gateway',), - base.join('doc',), - base.join('rest', 'directive.py'), - base.join('test', 'testing', 'import_test'), - base.join('bin'), base.join('code', 'oldmagic.py'), base.join('execnet', 'script'), base.join('compat', 'testing'), @@ -46,6 +41,11 @@ def test_importall(): def recurse(p): return p.check(dotfile=0) and p.basename != "attic" + try: + import execnet + except ImportError: + execnet = None + for p in base.visit('*.py', recurse): if p.basename == '__init__.py': continue @@ -57,6 +57,10 @@ def test_importall(): else: relpath = relpath.replace(base.sep, '.') modpath = 'py.impl.%s' % relpath + if modpath.startswith("py.impl.test.dist") or \ + modpath.startswith("py.impl.test.looponfail"): + if not execnet: + continue check_import(modpath) def check_import(modpath): --- a/.hgignore +++ b/.hgignore @@ -18,3 +18,4 @@ build/ dist/ py.egg-info issue/ +3rdparty/ --- a/py/plugin/pytest_tmpdir.py +++ b/py/plugin/pytest_tmpdir.py @@ -18,4 +18,5 @@ def pytest_funcarg__tmpdir(request): path object. 
""" name = request.function.__name__ - return request.config.mktemp(name, numbered=True) + x = request.config.mktemp(name, numbered=True) + return x.realpath() From commits-noreply at bitbucket.org Fri Nov 20 00:20:07 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 19 Nov 2009 23:20:07 +0000 (UTC) Subject: [py-svn] py-trunk commit c2d56702f15d: a bit of padding under the logo Message-ID: <20091119232007.7D4797EEF0@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258461927 -3600 # Node ID c2d56702f15d2e7b5d89ad024223e17f71015e02 # Parent 8862254d63573ac014dd38f13a746da0e15deca9 a bit of padding under the logo --- a/doc/style.css +++ b/doc/style.css @@ -744,6 +744,7 @@ td.toplist { img#pyimg { float: left; + padding-bottom: 1em; } div#navspace { From commits-noreply at bitbucket.org Fri Nov 20 00:20:15 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 19 Nov 2009 23:20:15 +0000 (UTC) Subject: [py-svn] py-trunk commit 92a3ec562e03: move CHANGELOG back to root level, add entries Message-ID: <20091119232015.36F157EEF6@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258672359 -3600 # Node ID 92a3ec562e0322c1911c5751d567cbb88210aee4 # Parent 5c1b2b05f69c82c686de95ab6dc31ee512188008 move CHANGELOG back to root level, add entries --- a/doc/changelog.txt +++ b/doc/changelog.txt @@ -1,358 +1,2 @@ -Changes between 1.1.1 and 1.1.0 -===================================== -- re-introduce py.test.cmdline.main for backward compatibility - -- fix a bug with path.check(versioned=True) for svn paths - -- try harder to have deprecation warnings for py.compat.* accesses - report a correct location - -Changes between 1.1.0 and 1.0.2 -===================================== - -* adjust and improve docs - -* remove py.rest tool and internal namespace - it was - never really advertised and can still be used with - the old release if needed. If there is interest - it could be revived into its own tool i guess. - -* fix issue48 and issue59: raise an Error if the module - from an imported test file does not seem to come from - the filepath - avoids "same-name" confusion that has - been reported repeatedly - -* merged Ronny's nose-compatibility hacks: now - nose-style setup_module() and setup() functions are - supported - -* introduce generalized py.test.mark function marking - -* reshuffle / refine command line grouping - -* deprecate parser.addgroup in favour of getgroup which creates option group - -* add --report command line option that allows to control showing of skipped/xfailed sections - -* generalized skipping: a new way to mark python functions with skipif or xfail - at function, class and modules level based on platform or sys-module attributes. - -* extend py.test.mark decorator to allow for positional args - -* introduce and test "py.cleanup -d" to remove empty directories - -* fix issue #59 - robustify unittest test collection - -* make bpython/help interaction work by adding an __all__ attribute - to ApiModule, cleanup initpkg - -* use MIT license for pylib, add some contributors - -* remove py.execnet code and substitute all usages with 'execnet' proper - -* fix issue50 - cached_setup now caches more to expectations - for test functions with multiple arguments. 
- -* merge Jarko's fixes, issue #45 and #46 - -* add the ability to specify a path for py.lookup to search in - -* fix a funcarg cached_setup bug probably only occuring - in distributed testing and "module" scope with teardown. - -* many fixes and changes for making the code base python3 compatible, - many thanks to Benjamin Peterson for helping with this. - -* consolidate builtins implementation to be compatible with >=2.3, - add helpers to ease keeping 2 and 3k compatible code - -* deprecate py.compat.doctest|subprocess|textwrap|optparse - -* deprecate py.magic.autopath, remove py/magic directory - -* move pytest assertion handling to py/code and a pytest_assertion - plugin, add "--no-assert" option, deprecate py.magic namespaces - in favour of (less) py.code ones. - -* consolidate and cleanup py/code classes and files - -* cleanup py/misc, move tests to bin-for-dist - -* introduce delattr/delitem/delenv methods to py.test's monkeypatch funcarg - -* consolidate py.log implementation, remove old approach. - -* introduce py.io.TextIO and py.io.BytesIO for distinguishing between - text/unicode and byte-streams (uses underlying standard lib io.* - if available) - -* make py.unittest_convert helper script available which converts "unittest.py" - style files into the simpler assert/direct-test-classes py.test/nosetests - style. The script was written by Laura Creighton. - -* simplified internal localpath implementation - -Changes between 1.0.1 and 1.0.2 -===================================== - -* fixing packaging issues, triggered by fedora redhat packaging, - also added doc, examples and contrib dirs to the tarball. - -* added a documentation link to the new django plugin. - -Changes between 1.0.0 and 1.0.1 -===================================== - -* added a 'pytest_nose' plugin which handles nose.SkipTest, - nose-style function/method/generator setup/teardown and - tries to report functions correctly. - -* capturing of unicode writes or encoded strings to sys.stdout/err - work better, also terminalwriting was adapted and somewhat - unified between windows and linux. - -* improved documentation layout and content a lot - -* added a "--help-config" option to show conftest.py / ENV-var names for - all longopt cmdline options, and some special conftest.py variables. - renamed 'conf_capture' conftest setting to 'option_capture' accordingly. - -* fix issue #27: better reporting on non-collectable items given on commandline - (e.g. pyc files) - -* fix issue #33: added --version flag (thanks Benjamin Peterson) - -* fix issue #32: adding support for "incomplete" paths to wcpath.status() - -* "Test" prefixed classes are *not* collected by default anymore if they - have an __init__ method - -* monkeypatch setenv() now accepts a "prepend" parameter - -* improved reporting of collection error tracebacks - -* simplified multicall mechanism and plugin architecture, - renamed some internal methods and argnames - -Changes between 1.0.0b9 and 1.0.0 -===================================== - -* more terse reporting try to show filesystem path relatively to current dir -* improve xfail output a bit - -Changes between 1.0.0b8 and 1.0.0b9 -===================================== - -* cleanly handle and report final teardown of test setup - -* fix svn-1.6 compat issue with py.path.svnwc().versioned() - (thanks Wouter Vanden Hove) - -* setup/teardown or collection problems now show as ERRORs - or with big "E"'s in the progress lines. they are reported - and counted separately. 
- -* dist-testing: properly handle test items that get locally - collected but cannot be collected on the remote side - often - due to platform/dependency reasons - -* simplified py.test.mark API - see keyword plugin documentation - -* integrate better with logging: capturing now by default captures - test functions and their immediate setup/teardown in a single stream - -* capsys and capfd funcargs now have a readouterr() and a close() method - (underlyingly py.io.StdCapture/FD objects are used which grew a - readouterr() method as well to return snapshots of captured out/err) - -* make assert-reinterpretation work better with comparisons not - returning bools (reported with numpy from thanks maciej fijalkowski) - -* reworked per-test output capturing into the pytest_iocapture.py plugin - and thus removed capturing code from config object - -* item.repr_failure(excinfo) instead of item.repr_failure(excinfo, outerr) - - -Changes between 1.0.0b7 and 1.0.0b8 -===================================== - -* pytest_unittest-plugin is now enabled by default - -* introduced pytest_keyboardinterrupt hook and - refined pytest_sessionfinish hooked, added tests. - -* workaround a buggy logging module interaction ("closing already closed - files"). Thanks to Sridhar Ratnakumar for triggering. - -* if plugins use "py.test.importorskip" for importing - a dependency only a warning will be issued instead - of exiting the testing process. - -* many improvements to docs: - - refined funcargs doc , use the term "factory" instead of "provider" - - added a new talk/tutorial doc page - - better download page - - better plugin docstrings - - added new plugins page and automatic doc generation script - -* fixed teardown problem related to partially failing funcarg setups - (thanks MrTopf for reporting), "pytest_runtest_teardown" is now - always invoked even if the "pytest_runtest_setup" failed. - -* tweaked doctest output for docstrings in py modules, - thanks Radomir. - -Changes between 1.0.0b3 and 1.0.0b7 -============================================= - -* renamed py.test.xfail back to py.test.mark.xfail to avoid - two ways to decorate for xfail - -* re-added py.test.mark decorator for setting keywords on functions - (it was actually documented so removing it was not nice) - -* remove scope-argument from request.addfinalizer() because - request.cached_setup has the scope arg. TOOWTDI. - -* perform setup finalization before reporting failures - -* apply modified patches from Andreas Kloeckner to allow - test functions to have no func_code (#22) and to make - "-k" and function keywords work (#20) - -* apply patch from Daniel Peolzleithner (issue #23) - -* resolve issue #18, multiprocessing.Manager() and - redirection clash - -* make __name__ == "__channelexec__" for remote_exec code - -Changes between 1.0.0b1 and 1.0.0b3 -============================================= - -* plugin classes are removed: one now defines - hooks directly in conftest.py or global pytest_*.py - files. - -* added new pytest_namespace(config) hook that allows - to inject helpers directly to the py.test.* namespace. - -* documented and refined many hooks - -* added new style of generative tests via - pytest_generate_tests hook that integrates - well with function arguments. 
- - -Changes between 0.9.2 and 1.0.0b1 -============================================= - -* introduced new "funcarg" setup method, - see doc/test/funcarg.txt - -* introduced plugin architecuture and many - new py.test plugins, see - doc/test/plugins.txt - -* teardown_method is now guaranteed to get - called after a test method has run. - -* new method: py.test.importorskip(mod,minversion) - will either import or call py.test.skip() - -* completely revised internal py.test architecture - -* new py.process.ForkedFunc object allowing to - fork execution of a function to a sub process - and getting a result back. - -XXX lots of things missing here XXX - -Changes between 0.9.1 and 0.9.2 -=============================== - -* refined installation and metadata, created new setup.py, - now based on setuptools/ez_setup (thanks to Ralf Schmitt - for his support). - -* improved the way of making py.* scripts available in - windows environments, they are now added to the - Scripts directory as ".cmd" files. - -* py.path.svnwc.status() now is more complete and - uses xml output from the 'svn' command if available - (Guido Wesdorp) - -* fix for py.path.svn* to work with svn 1.5 - (Chris Lamb) - -* fix path.relto(otherpath) method on windows to - use normcase for checking if a path is relative. - -* py.test's traceback is better parseable from editors - (follows the filenames:LINENO: MSG convention) - (thanks to Osmo Salomaa) - -* fix to javascript-generation, "py.test --runbrowser" - should work more reliably now - -* removed previously accidentally added - py.test.broken and py.test.notimplemented helpers. - -* there now is a py.__version__ attribute - -Changes between 0.9.0 and 0.9.1 -=============================== - -This is a fairly complete list of changes between 0.9 and 0.9.1, which can -serve as a reference for developers. 
- -* allowing + signs in py.path.svn urls [39106] -* fixed support for Failed exceptions without excinfo in py.test [39340] -* added support for killing processes for Windows (as well as platforms that - support os.kill) in py.misc.killproc [39655] -* added setup/teardown for generative tests to py.test [40702] -* added detection of FAILED TO LOAD MODULE to py.test [40703, 40738, 40739] -* fixed problem with calling .remove() on wcpaths of non-versioned files in - py.path [44248] -* fixed some import and inheritance issues in py.test [41480, 44648, 44655] -* fail to run greenlet tests when pypy is available, but without stackless - [45294] -* small fixes in rsession tests [45295] -* fixed issue with 2.5 type representations in py.test [45483, 45484] -* made that internal reporting issues displaying is done atomically in py.test - [45518] -* made that non-existing files are igored by the py.lookup script [45519] -* improved exception name creation in py.test [45535] -* made that less threads are used in execnet [merge in 45539] -* removed lock required for atomical reporting issue displaying in py.test - [45545] -* removed globals from execnet [45541, 45547] -* refactored cleanup mechanics, made that setDaemon is set to 1 to make atexit - get called in 2.5 (py.execnet) [45548] -* fixed bug in joining threads in py.execnet's servemain [45549] -* refactored py.test.rsession tests to not rely on exact output format anymore - [45646] -* using repr() on test outcome [45647] -* added 'Reason' classes for py.test.skip() [45648, 45649] -* killed some unnecessary sanity check in py.test.collect [45655] -* avoid using os.tmpfile() in py.io.fdcapture because on Windows it's only - usable by Administrators [45901] -* added support for locking and non-recursive commits to py.path.svnwc [45994] -* locking files in py.execnet to prevent CPython from segfaulting [46010] -* added export() method to py.path.svnurl -* fixed -d -x in py.test [47277] -* fixed argument concatenation problem in py.path.svnwc [49423] -* restore py.test behaviour that it exits with code 1 when there are failures - [49974] -* don't fail on html files that don't have an accompanying .txt file [50606] -* fixed 'utestconvert.py < input' [50645] -* small fix for code indentation in py.code.source [50755] -* fix _docgen.py documentation building [51285] -* improved checks for source representation of code blocks in py.test [51292] -* added support for passing authentication to py.path.svn* objects [52000, - 52001] -* removed sorted() call for py.apigen tests in favour of [].sort() to support - Python 2.3 [52481] +.. include:: ../CHANGELOG --- a/CHANGELOG +++ b/CHANGELOG @@ -1,7 +1,363 @@ +Changes between 1.1.1 and 1.1.0 +===================================== -see doc/announce/release-1.1.0.txt for a summary -of the last minor release +- fix py.test to work correctly with execnet >= 1.0.0b4 -and +- re-introduce py.test.cmdline.main for better backward compatibility -see doc/changelog.txt for details +- make svnwc.update() default to interactive mode like in 1.0.x + and add svnwc.update(interactive=False) to inhibit interaction. + +- fix a bug with path.check(versioned=True) for svn paths + +- try harder to have deprecation warnings for py.compat.* accesses + report a correct location + +Changes between 1.1.0 and 1.0.2 +===================================== + +* adjust and improve docs + +* remove py.rest tool and internal namespace - it was + never really advertised and can still be used with + the old release if needed. 
If there is interest + it could be revived into its own tool i guess. + +* fix issue48 and issue59: raise an Error if the module + from an imported test file does not seem to come from + the filepath - avoids "same-name" confusion that has + been reported repeatedly + +* merged Ronny's nose-compatibility hacks: now + nose-style setup_module() and setup() functions are + supported + +* introduce generalized py.test.mark function marking + +* reshuffle / refine command line grouping + +* deprecate parser.addgroup in favour of getgroup which creates option group + +* add --report command line option that allows to control showing of skipped/xfailed sections + +* generalized skipping: a new way to mark python functions with skipif or xfail + at function, class and modules level based on platform or sys-module attributes. + +* extend py.test.mark decorator to allow for positional args + +* introduce and test "py.cleanup -d" to remove empty directories + +* fix issue #59 - robustify unittest test collection + +* make bpython/help interaction work by adding an __all__ attribute + to ApiModule, cleanup initpkg + +* use MIT license for pylib, add some contributors + +* remove py.execnet code and substitute all usages with 'execnet' proper + +* fix issue50 - cached_setup now caches more to expectations + for test functions with multiple arguments. + +* merge Jarko's fixes, issue #45 and #46 + +* add the ability to specify a path for py.lookup to search in + +* fix a funcarg cached_setup bug probably only occuring + in distributed testing and "module" scope with teardown. + +* many fixes and changes for making the code base python3 compatible, + many thanks to Benjamin Peterson for helping with this. + +* consolidate builtins implementation to be compatible with >=2.3, + add helpers to ease keeping 2 and 3k compatible code + +* deprecate py.compat.doctest|subprocess|textwrap|optparse + +* deprecate py.magic.autopath, remove py/magic directory + +* move pytest assertion handling to py/code and a pytest_assertion + plugin, add "--no-assert" option, deprecate py.magic namespaces + in favour of (less) py.code ones. + +* consolidate and cleanup py/code classes and files + +* cleanup py/misc, move tests to bin-for-dist + +* introduce delattr/delitem/delenv methods to py.test's monkeypatch funcarg + +* consolidate py.log implementation, remove old approach. + +* introduce py.io.TextIO and py.io.BytesIO for distinguishing between + text/unicode and byte-streams (uses underlying standard lib io.* + if available) + +* make py.unittest_convert helper script available which converts "unittest.py" + style files into the simpler assert/direct-test-classes py.test/nosetests + style. The script was written by Laura Creighton. + +* simplified internal localpath implementation + +Changes between 1.0.1 and 1.0.2 +===================================== + +* fixing packaging issues, triggered by fedora redhat packaging, + also added doc, examples and contrib dirs to the tarball. + +* added a documentation link to the new django plugin. + +Changes between 1.0.0 and 1.0.1 +===================================== + +* added a 'pytest_nose' plugin which handles nose.SkipTest, + nose-style function/method/generator setup/teardown and + tries to report functions correctly. + +* capturing of unicode writes or encoded strings to sys.stdout/err + work better, also terminalwriting was adapted and somewhat + unified between windows and linux. 
+ +* improved documentation layout and content a lot + +* added a "--help-config" option to show conftest.py / ENV-var names for + all longopt cmdline options, and some special conftest.py variables. + renamed 'conf_capture' conftest setting to 'option_capture' accordingly. + +* fix issue #27: better reporting on non-collectable items given on commandline + (e.g. pyc files) + +* fix issue #33: added --version flag (thanks Benjamin Peterson) + +* fix issue #32: adding support for "incomplete" paths to wcpath.status() + +* "Test" prefixed classes are *not* collected by default anymore if they + have an __init__ method + +* monkeypatch setenv() now accepts a "prepend" parameter + +* improved reporting of collection error tracebacks + +* simplified multicall mechanism and plugin architecture, + renamed some internal methods and argnames + +Changes between 1.0.0b9 and 1.0.0 +===================================== + +* more terse reporting try to show filesystem path relatively to current dir +* improve xfail output a bit + +Changes between 1.0.0b8 and 1.0.0b9 +===================================== + +* cleanly handle and report final teardown of test setup + +* fix svn-1.6 compat issue with py.path.svnwc().versioned() + (thanks Wouter Vanden Hove) + +* setup/teardown or collection problems now show as ERRORs + or with big "E"'s in the progress lines. they are reported + and counted separately. + +* dist-testing: properly handle test items that get locally + collected but cannot be collected on the remote side - often + due to platform/dependency reasons + +* simplified py.test.mark API - see keyword plugin documentation + +* integrate better with logging: capturing now by default captures + test functions and their immediate setup/teardown in a single stream + +* capsys and capfd funcargs now have a readouterr() and a close() method + (underlyingly py.io.StdCapture/FD objects are used which grew a + readouterr() method as well to return snapshots of captured out/err) + +* make assert-reinterpretation work better with comparisons not + returning bools (reported with numpy from thanks maciej fijalkowski) + +* reworked per-test output capturing into the pytest_iocapture.py plugin + and thus removed capturing code from config object + +* item.repr_failure(excinfo) instead of item.repr_failure(excinfo, outerr) + + +Changes between 1.0.0b7 and 1.0.0b8 +===================================== + +* pytest_unittest-plugin is now enabled by default + +* introduced pytest_keyboardinterrupt hook and + refined pytest_sessionfinish hooked, added tests. + +* workaround a buggy logging module interaction ("closing already closed + files"). Thanks to Sridhar Ratnakumar for triggering. + +* if plugins use "py.test.importorskip" for importing + a dependency only a warning will be issued instead + of exiting the testing process. + +* many improvements to docs: + - refined funcargs doc , use the term "factory" instead of "provider" + - added a new talk/tutorial doc page + - better download page + - better plugin docstrings + - added new plugins page and automatic doc generation script + +* fixed teardown problem related to partially failing funcarg setups + (thanks MrTopf for reporting), "pytest_runtest_teardown" is now + always invoked even if the "pytest_runtest_setup" failed. + +* tweaked doctest output for docstrings in py modules, + thanks Radomir. 
+ +Changes between 1.0.0b3 and 1.0.0b7 +============================================= + +* renamed py.test.xfail back to py.test.mark.xfail to avoid + two ways to decorate for xfail + +* re-added py.test.mark decorator for setting keywords on functions + (it was actually documented so removing it was not nice) + +* remove scope-argument from request.addfinalizer() because + request.cached_setup has the scope arg. TOOWTDI. + +* perform setup finalization before reporting failures + +* apply modified patches from Andreas Kloeckner to allow + test functions to have no func_code (#22) and to make + "-k" and function keywords work (#20) + +* apply patch from Daniel Peolzleithner (issue #23) + +* resolve issue #18, multiprocessing.Manager() and + redirection clash + +* make __name__ == "__channelexec__" for remote_exec code + +Changes between 1.0.0b1 and 1.0.0b3 +============================================= + +* plugin classes are removed: one now defines + hooks directly in conftest.py or global pytest_*.py + files. + +* added new pytest_namespace(config) hook that allows + to inject helpers directly to the py.test.* namespace. + +* documented and refined many hooks + +* added new style of generative tests via + pytest_generate_tests hook that integrates + well with function arguments. + + +Changes between 0.9.2 and 1.0.0b1 +============================================= + +* introduced new "funcarg" setup method, + see doc/test/funcarg.txt + +* introduced plugin architecuture and many + new py.test plugins, see + doc/test/plugins.txt + +* teardown_method is now guaranteed to get + called after a test method has run. + +* new method: py.test.importorskip(mod,minversion) + will either import or call py.test.skip() + +* completely revised internal py.test architecture + +* new py.process.ForkedFunc object allowing to + fork execution of a function to a sub process + and getting a result back. + +XXX lots of things missing here XXX + +Changes between 0.9.1 and 0.9.2 +=============================== + +* refined installation and metadata, created new setup.py, + now based on setuptools/ez_setup (thanks to Ralf Schmitt + for his support). + +* improved the way of making py.* scripts available in + windows environments, they are now added to the + Scripts directory as ".cmd" files. + +* py.path.svnwc.status() now is more complete and + uses xml output from the 'svn' command if available + (Guido Wesdorp) + +* fix for py.path.svn* to work with svn 1.5 + (Chris Lamb) + +* fix path.relto(otherpath) method on windows to + use normcase for checking if a path is relative. + +* py.test's traceback is better parseable from editors + (follows the filenames:LINENO: MSG convention) + (thanks to Osmo Salomaa) + +* fix to javascript-generation, "py.test --runbrowser" + should work more reliably now + +* removed previously accidentally added + py.test.broken and py.test.notimplemented helpers. + +* there now is a py.__version__ attribute + +Changes between 0.9.0 and 0.9.1 +=============================== + +This is a fairly complete list of changes between 0.9 and 0.9.1, which can +serve as a reference for developers. 
+ +* allowing + signs in py.path.svn urls [39106] +* fixed support for Failed exceptions without excinfo in py.test [39340] +* added support for killing processes for Windows (as well as platforms that + support os.kill) in py.misc.killproc [39655] +* added setup/teardown for generative tests to py.test [40702] +* added detection of FAILED TO LOAD MODULE to py.test [40703, 40738, 40739] +* fixed problem with calling .remove() on wcpaths of non-versioned files in + py.path [44248] +* fixed some import and inheritance issues in py.test [41480, 44648, 44655] +* fail to run greenlet tests when pypy is available, but without stackless + [45294] +* small fixes in rsession tests [45295] +* fixed issue with 2.5 type representations in py.test [45483, 45484] +* made that internal reporting issues displaying is done atomically in py.test + [45518] +* made that non-existing files are igored by the py.lookup script [45519] +* improved exception name creation in py.test [45535] +* made that less threads are used in execnet [merge in 45539] +* removed lock required for atomical reporting issue displaying in py.test + [45545] +* removed globals from execnet [45541, 45547] +* refactored cleanup mechanics, made that setDaemon is set to 1 to make atexit + get called in 2.5 (py.execnet) [45548] +* fixed bug in joining threads in py.execnet's servemain [45549] +* refactored py.test.rsession tests to not rely on exact output format anymore + [45646] +* using repr() on test outcome [45647] +* added 'Reason' classes for py.test.skip() [45648, 45649] +* killed some unnecessary sanity check in py.test.collect [45655] +* avoid using os.tmpfile() in py.io.fdcapture because on Windows it's only + usable by Administrators [45901] +* added support for locking and non-recursive commits to py.path.svnwc [45994] +* locking files in py.execnet to prevent CPython from segfaulting [46010] +* added export() method to py.path.svnurl +* fixed -d -x in py.test [47277] +* fixed argument concatenation problem in py.path.svnwc [49423] +* restore py.test behaviour that it exits with code 1 when there are failures + [49974] +* don't fail on html files that don't have an accompanying .txt file [50606] +* fixed 'utestconvert.py < input' [50645] +* small fix for code indentation in py.code.source [50755] +* fix _docgen.py documentation building [51285] +* improved checks for source representation of code blocks in py.test [51292] +* added support for passing authentication to py.path.svn* objects [52000, + 52001] +* removed sorted() call for py.apigen tests in favour of [].sort() to support + Python 2.3 [52481] From commits-noreply at bitbucket.org Fri Nov 20 00:20:09 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 19 Nov 2009 23:20:09 +0000 (UTC) Subject: [py-svn] py-trunk commit b5d1a500603a: reintroduce py.test.cmdline.main() (alias for py.cmdline.pytest()) Message-ID: <20091119232009.8B59C7EEF4@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258668808 -3600 # Node ID b5d1a500603adb9df1e981027be5ebf419f5aa6d # Parent c2d56702f15d2e7b5d89ad024223e17f71015e02 reintroduce py.test.cmdline.main() (alias for py.cmdline.pytest()) resolves issue #61 --- a/py/__init__.py +++ b/py/__init__.py @@ -68,6 +68,9 @@ py.apipkg.initpkg(__name__, dict( 'Function' : '.impl.test.pycollect:Function', '_fillfuncargs' : '.impl.test.funcargs:fillfuncargs', }, + 'cmdline': { + 'main' : '.impl.test.cmdline:main', # 
backward compat + }, }, # hook into the top-level standard library --- a/doc/test/customize.txt +++ b/doc/test/customize.txt @@ -429,13 +429,3 @@ name. Given a filesystem ``fspath`` it * perform ``sys.path.insert(0, basedir)``. * import the root package as ``root`` - -* determine the fully qualified name for ``fspath`` by either: - - * calling ``root.__pkg__.getimportname(fspath)`` if the - ``__pkg__`` exists.` or - - * otherwise use the relative path of the module path to - the base dir and turn slashes into dots and strike - the trailing ``.py``. - --- a/testing/root/test_api.py +++ /dev/null @@ -1,6 +0,0 @@ - -from py.test import raises -import py -import sys -import inspect - --- a/testing/pytest/test_outcome.py +++ b/testing/pytest/test_outcome.py @@ -56,3 +56,16 @@ def test_importorskip_imports_last_modul ospath = py.test.importorskip("os.path") assert os.path == ospath + +def test_pytest_cmdline_main(testdir): + p = testdir.makepyfile(""" + import sys + sys.path.insert(0, %r) + import py + def test_hello(): + assert 1 + if __name__ == '__main__': + py.test.cmdline.main([__file__]) + """ % (str(py._dir.dirpath()))) + import subprocess + subprocess.check_call([sys.executable, str(p)]) --- a/py/impl/cmdline/pytest.py +++ b/py/impl/cmdline/pytest.py @@ -1,5 +1,5 @@ #!/usr/bin/env python import py -def main(): - py.test.cmdline.main() +def main(args): + py.test.cmdline.main(args) --- a/doc/changelog.txt +++ b/doc/changelog.txt @@ -1,12 +1,13 @@ Changes between 1.1.1 and 1.1.0 ===================================== +- re-introduce py.test.cmdline.main for backward compatibility + - fix a bug with path.check(versioned=True) for svn paths - try harder to have deprecation warnings for py.compat.* accesses report a correct location - Changes between 1.1.0 and 1.0.2 ===================================== From commits-noreply at bitbucket.org Fri Nov 20 00:20:11 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 19 Nov 2009 23:20:11 +0000 (UTC) Subject: [py-svn] py-trunk commit 7b44b348f3f3: adapt to new execnet.Group code (since execnet-1.0.0b4), strike superflous code Message-ID: <20091119232011.CCC357EEEB@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258668809 -3600 # Node ID 7b44b348f3f3e31f8f8c5de17e6607523efaf9f6 # Parent b5d1a500603adb9df1e981027be5ebf419f5aa6d adapt to new execnet.Group code (since execnet-1.0.0b4), strike superflous code --- a/testing/plugin/test_pytest_terminal.py +++ b/testing/plugin/test_pytest_terminal.py @@ -130,16 +130,16 @@ class TestTerminal: rep.pytest_gwmanage_newgateway(gw1, rinfo) linecomp.assert_contains_lines([ - "X1*popen*xyz*2.5*" + "*X1*popen*xyz*2.5*" ]) rep.pytest_gwmanage_rsyncstart(source="hello", gateways=[gw1, gw2]) linecomp.assert_contains_lines([ - "rsyncstart: hello -> X1, X2" + "rsyncstart: hello -> [X1], [X2]" ]) rep.pytest_gwmanage_rsyncfinish(source="hello", gateways=[gw1, gw2]) linecomp.assert_contains_lines([ - "rsyncfinish: hello -> X1, X2" + "rsyncfinish: hello -> [X1], [X2]" ]) def test_writeline(self, testdir, linecomp): --- a/py/impl/test/dist/gwmanage.py +++ b/py/impl/test/dist/gwmanage.py @@ -10,9 +10,9 @@ from execnet.gateway_base import RemoteE class GatewayManager: RemoteError = RemoteError def __init__(self, specs, hook, defaultchdir="pyexecnetcache"): - self.gateways = [] self.specs = [] self.hook = hook + self.group = execnet.Group() for spec in specs: if not isinstance(spec, 
execnet.XSpec): spec = execnet.XSpec(spec) @@ -21,48 +21,19 @@ class GatewayManager: self.specs.append(spec) def makegateways(self): - assert not self.gateways + assert not list(self.group) for spec in self.specs: - gw = execnet.makegateway(spec) - self.gateways.append(gw) - gw.id = "[%s]" % len(self.gateways) + gw = self.group.makegateway(spec) self.hook.pytest_gwmanage_newgateway( gateway=gw, platinfo=gw._rinfo()) - def getgateways(self, remote=True, inplacelocal=True): - if not self.gateways and self.specs: - self.makegateways() - l = [] - for gw in self.gateways: - if gw.spec._samefilesystem(): - if inplacelocal: - l.append(gw) - else: - if remote: - l.append(gw) - return execnet.MultiGateway(gateways=l) - - def multi_exec(self, source, inplacelocal=True): - """ remote execute code on all gateways. - @param inplacelocal=False: don't send code to inplacelocal hosts. - """ - multigw = self.getgateways(inplacelocal=inplacelocal) - return multigw.remote_exec(source) - - def multi_chdir(self, basename, inplacelocal=True): - """ perform a remote chdir to the given path, may be relative. - @param inplacelocal=False: don't send code to inplacelocal hosts. - """ - self.multi_exec("import os ; os.chdir(%r)" % basename, - inplacelocal=inplacelocal).waitclose() - def rsync(self, source, notify=None, verbose=False, ignores=None): """ perform rsync to all remote hosts. """ rsync = HostRSync(source, verbose=verbose, ignores=ignores) seen = py.builtin.set() gateways = [] - for gateway in self.gateways: + for gateway in self.group: spec = gateway.spec if not spec._samefilesystem(): if spec not in seen: @@ -84,9 +55,7 @@ class GatewayManager: ) def exit(self): - while self.gateways: - gw = self.gateways.pop() - gw.exit() + self.group.terminate() class HostRSync(execnet.RSync): """ RSyncer that filters out common files --- a/py/plugin/pytest_terminal.py +++ b/py/plugin/pytest_terminal.py @@ -150,7 +150,7 @@ class TerminalReporter: else: d['extra'] = "" d['cwd'] = platinfo.cwd - infoline = ("%(id)s %(spec)s -- platform %(platform)s, " + infoline = ("[%(id)s] %(spec)s -- platform %(platform)s, " "Python %(version)s " "cwd: %(cwd)s" "%(extra)s" % d) @@ -158,14 +158,14 @@ class TerminalReporter: self.gateway2info[gateway] = infoline def pytest_gwmanage_rsyncstart(self, source, gateways): - targets = ", ".join([gw.id for gw in gateways]) + targets = ", ".join(["[%s]" % gw.id for gw in gateways]) msg = "rsyncstart: %s -> %s" %(source, targets) if not self.config.option.verbose: msg += " # use --verbose to see rsync progress" self.write_line(msg) def pytest_gwmanage_rsyncfinish(self, source, gateways): - targets = ", ".join([gw.id for gw in gateways]) + targets = ", ".join(["[%s]" % gw.id for gw in gateways]) self.write_line("rsyncfinish: %s -> %s" %(source, targets)) def pytest_plugin_registered(self, plugin): @@ -177,11 +177,11 @@ class TerminalReporter: self.write_line(msg) def pytest_testnodeready(self, node): - self.write_line("%s txnode ready to receive tests" %(node.gateway.id,)) + self.write_line("[%s] txnode ready to receive tests" %(node.gateway.id,)) def pytest_testnodedown(self, node, error): if error: - self.write_line("%s node down, error: %s" %(node.gateway.id, error)) + self.write_line("[%s] node down, error: %s" %(node.gateway.id, error)) def pytest_trace(self, category, msg): if self.config.option.debug or \ @@ -203,7 +203,7 @@ class TerminalReporter: line = self._reportinfoline(item) extra = "" if node: - extra = "-> " + str(node.gateway.id) + extra = "-> [%s]" % node.gateway.id 
self.write_ensure_prefix(line, extra) else: if self.config.option.verbose: @@ -238,7 +238,7 @@ class TerminalReporter: else: self.ensure_newline() if hasattr(rep, 'node'): - self._tw.write("%s " % rep.node.gateway.id) + self._tw.write("[%s] " % rep.node.gateway.id) self._tw.write(word, **markup) self._tw.write(" " + line) self.currentfspath = -2 --- a/py/impl/test/dist/nodemanage.py +++ b/py/impl/test/dist/nodemanage.py @@ -57,7 +57,7 @@ class NodeManager(object): def setup_nodes(self, putevent): self.rsync_roots() self.trace("setting up nodes") - for gateway in self.gwmanager.gateways: + for gateway in self.gwmanager.group: node = TXNode(gateway, self.config, putevent, slaveready=self._slaveready) gateway.node = node # to keep node alive self.trace("started node %r" % node) @@ -67,7 +67,7 @@ class NodeManager(object): #assert node.gateway.node == node self.nodes.append(node) self.trace("%s slave node ready %r" % (node.gateway.id, node)) - if len(self.nodes) == len(self.gwmanager.gateways): + if len(self.nodes) == len(list(self.gwmanager.group)): self._nodesready.set() def wait_nodesready(self, timeout=None): --- a/testing/pytest/dist/test_gwmanage.py +++ b/testing/pytest/dist/test_gwmanage.py @@ -37,26 +37,26 @@ class TestGatewayManagerPopen: hm.makegateways() call = hookrecorder.popcall("pytest_gwmanage_newgateway") assert call.gateway.spec == execnet.XSpec("popen") - assert call.gateway.id == "[1]" + assert call.gateway.id == "1" assert call.platinfo.executable == call.gateway._rinfo().executable call = hookrecorder.popcall("pytest_gwmanage_newgateway") - assert call.gateway.id == "[2]" - assert len(hm.gateways) == 2 + assert call.gateway.id == "2" + assert len(hm.group) == 2 hm.exit() - assert not len(hm.gateways) + assert not len(hm.group) def test_popens_rsync(self, hook, mysetup): source = mysetup.source hm = GatewayManager(["popen"] * 2, hook) hm.makegateways() - assert len(hm.gateways) == 2 - for gw in hm.gateways: + assert len(hm.group) == 2 + for gw in hm.group: gw.remote_exec = None l = [] hm.rsync(source, notify=lambda *args: l.append(args)) assert not l hm.exit() - assert not len(hm.gateways) + assert not len(hm.group) def test_rsync_popen_with_path(self, hook, mysetup): source, dest = mysetup.source, mysetup.dest @@ -66,7 +66,7 @@ class TestGatewayManagerPopen: l = [] hm.rsync(source, notify=lambda *args: l.append(args)) assert len(l) == 1 - assert l[0] == ("rsyncrootready", hm.gateways[0].spec, source) + assert l[0] == ("rsyncrootready", hm.group['1'].spec, source) hm.exit() dest = dest.join(source.basename) assert dest.join("dir1").check() @@ -82,49 +82,9 @@ class TestGatewayManagerPopen: call = hookrecorder.popcall("pytest_gwmanage_rsyncstart") assert call.source == source assert len(call.gateways) == 1 - assert hm.gateways[0] == call.gateways[0] + assert hm.group["1"] == call.gateways[0] call = hookrecorder.popcall("pytest_gwmanage_rsyncfinish") - def test_multi_chdir_popen_with_path(self, hook, testdir): - hm = GatewayManager(["popen//chdir=hello"] * 2, hook) - testdir.tmpdir.chdir() - hellopath = testdir.tmpdir.mkdir("hello").realpath() - hm.makegateways() - l = hm.multi_exec( - "import os ; channel.send(os.getcwd())").receive_each() - paths = [x[1] for x in l] - assert l == [str(hellopath)] * 2 - py.test.raises(hm.RemoteError, - 'hm.multi_chdir("world", inplacelocal=False)') - worldpath = hellopath.mkdir("world") - hm.multi_chdir("world", inplacelocal=False) - l = hm.multi_exec( - "import os ; channel.send(os.getcwd())").receive_each() - assert len(l) == 2 - assert 
l[0] == l[1] - curwd = os.getcwd() - assert l[0].startswith(curwd) - assert l[0].endswith("world") - - def test_multi_chdir_popen(self, testdir, hook): - import os - hm = GatewayManager(["popen"] * 2, hook) - testdir.tmpdir.chdir() - hellopath = testdir.tmpdir.mkdir("hello") - hm.makegateways() - hm.multi_chdir("hello", inplacelocal=False) - l = hm.multi_exec("import os ; channel.send(os.getcwd())").receive_each() - assert len(l) == 2 - curwd = os.path.realpath(os.getcwd()) - assert l == [curwd] * 2 - - hm.multi_chdir("hello") - l = hm.multi_exec("import os ; channel.send(os.getcwd())").receive_each() - assert len(l) == 2 - assert l[0] == l[1] - assert l[0].startswith(curwd) - assert l[0].endswith("hello") - class pytest_funcarg__mysetup: def __init__(self, request): tmp = request.getfuncargvalue('tmpdir') From commits-noreply at bitbucket.org Fri Nov 20 00:20:13 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Thu, 19 Nov 2009 23:20:13 +0000 (UTC) Subject: [py-svn] py-trunk commit 5c1b2b05f69c: fix compatibility issue with svnwc.update and put CHANGELOG to rootlevel Message-ID: <20091119232013.81F437EEF5@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258672326 -3600 # Node ID 5c1b2b05f69c82c686de95ab6dc31ee512188008 # Parent 7b44b348f3f3e31f8f8c5de17e6607523efaf9f6 fix compatibility issue with svnwc.update and put CHANGELOG to rootlevel --- a/py/impl/path/svnwc.py +++ b/py/impl/path/svnwc.py @@ -521,9 +521,12 @@ class SvnWCCommandPath(common.PathBase): args.append(url) self._authsvn('co', args) - def update(self, rev='HEAD'): + def update(self, rev='HEAD', interactive=True): """ update working copy item to given revision. (None -> HEAD). """ - self._authsvn('up', ['-r', rev, "--non-interactive"],) + opts = ['-r', rev] + if not interactive: + opts.append("--non-interactive") + self._authsvn('up', opts) def write(self, content, mode='w'): """ write content into local filesystem wc. 
""" --- a/testing/path/test_svnwc.py +++ b/testing/path/test_svnwc.py @@ -176,7 +176,7 @@ class TestWCSvnCommandPath(CommonSvnTest p.write('bar') wc.commit('wrote some data') wccopy.join('conflictsamplefile').write('baz') - wccopy.update() + wccopy.update(interactive=False) s = wccopy.status() assert [x.basename for x in s.conflict] == ['conflictsamplefile'] From commits-noreply at bitbucket.org Fri Nov 20 09:19:53 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Fri, 20 Nov 2009 08:19:53 +0000 (UTC) Subject: [py-svn] py-trunk commit c3b549bd0904: be a bit more helpful by default regarding --report settings Message-ID: <20091120081953.754D57EEE7@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258704664 -3600 # Node ID c3b549bd0904a4527f382058de7a6d551157b6c5 # Parent 92a3ec562e0322c1911c5751d567cbb88210aee4 be a bit more helpful by default regarding --report settings --- a/py/plugin/pytest_skipping.py +++ b/py/plugin/pytest_skipping.py @@ -162,10 +162,9 @@ def show_xfailed(terminalreporter): xfailed = tr.stats.get("xfailed") if xfailed: if not tr.hasopt('xfailed'): - if tr.config.getvalue("verbose"): - tr.write_line( - "%d expected failures, use --report=xfailed for more info" % - len(xfailed)) + tr.write_line( + "%d expected failures, use --report=xfailed for more info" % + len(xfailed)) return tr.write_sep("_", "expected failures") for rep in xfailed: @@ -220,10 +219,9 @@ def show_skipped(terminalreporter): skipped = tr.stats.get('skipped', []) if skipped: if not tr.hasopt('skipped'): - if tr.config.getvalue("verbose"): - tr.write_line( - "%d skipped tests, use --report=skipped for more info" % - len(skipped)) + tr.write_line( + "%d skipped tests, use --report=skipped for more info" % + len(skipped)) return fskips = folded_skips(skipped) if fskips: --- a/py/plugin/pytest_terminal.py +++ b/py/plugin/pytest_terminal.py @@ -15,7 +15,7 @@ def pytest_addoption(parser): help="show locals in tracebacks (disabled by default).") group.addoption('--report', action="store", dest="report", default=None, metavar="opts", - help="comma separated reporting options") + help="comma separated options, valid: skipped,xfailed") group._addoption('--tb', metavar="style", action="store", dest="tbstyle", default='long', type="choice", choices=['long', 'short', 'no'], From commits-noreply at bitbucket.org Fri Nov 20 09:19:55 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Fri, 20 Nov 2009 08:19:55 +0000 (UTC) Subject: [py-svn] py-trunk commit d7614a65d219: add % as allowed char, condense CHANGELOG Message-ID: <20091120081955.2E8517EEEB@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258705169 -3600 # Node ID d7614a65d21983e9e1cd8df199f0d8c7a8f024ea # Parent c3b549bd0904a4527f382058de7a6d551157b6c5 add % as allowed char, condense CHANGELOG --- a/py/impl/path/svnwc.py +++ b/py/impl/path/svnwc.py @@ -77,7 +77,7 @@ repositories = RepoCache() # svn support code -ALLOWED_CHARS = "_ -/\\=$.~+" #add characters as necessary when tested +ALLOWED_CHARS = "_ -/\\=$.~+%" #add characters as necessary when tested if sys.platform == "win32": ALLOWED_CHARS += ":" ALLOWED_CHARS_HOST = ALLOWED_CHARS + '@:' --- a/CHANGELOG +++ b/CHANGELOG @@ -1,14 +1,13 @@ Changes between 1.1.1 and 1.1.0 ===================================== -- fix py.test to work 
correctly with execnet >= 1.0.0b4 +- fix py.test dist-testing to work with execnet >= 1.0.0b4 (required) -- re-introduce py.test.cmdline.main for better backward compatibility +- re-introduce py.test.cmdline.main() for better backward compatibility -- make svnwc.update() default to interactive mode like in 1.0.x - and add svnwc.update(interactive=False) to inhibit interaction. - -- fix a bug with path.check(versioned=True) for svn paths +- svn paths: fix a bug with path.check(versioned=True) for svn paths, + allow '%' in svn paths, make svnwc.update() default to interactive mode + like in 1.0.x and add svnwc.update(interactive=False) to inhibit interaction. - try harder to have deprecation warnings for py.compat.* accesses report a correct location From commits-noreply at bitbucket.org Tue Nov 24 10:49:53 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Tue, 24 Nov 2009 09:49:53 +0000 (UTC) Subject: [py-svn] py-trunk commit 050edb3ab1a9: some forgotten doc fixes Message-ID: <20091124094953.737177EF11@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258975420 -3600 # Node ID 050edb3ab1a9568f2bed3bdebaafbbd0c1cb9660 # Parent a77479cc718174d122d679c45761ef19840d7bd8 some forgotten doc fixes --- a/doc/announce/release-1.1.0.txt +++ b/doc/announce/release-1.1.0.txt @@ -4,8 +4,8 @@ py.test/pylib 1.1.0: Python3, Jython, ad Features: * compatible to Python3 (single py2/py3 source), `easy to install`_ +* conditional skipping_: skip/xfail based on platform/dependencies * generalized marking_: mark tests one a whole-class or whole-module basis -* conditional skipping_: skip/xfail based on platform/dependencies Fixes: --- a/doc/execnet.txt +++ b/doc/execnet.txt @@ -2,11 +2,11 @@ py.execnet: *elastic* distributed programming ============================================================================== -Since pylib 1.1 "py.execnet" is separated out of hte lib and now -available through the standalone `execnet standalone package`_. +Since pylib 1.1 "py.execnet" ceased to exist and is now available +as a separately developed `execnet standalone package`_. -If you have usages of the "py.execnet.*" 1.0 API you can likely -rename all occurences of the string ``py.execnet.`` with the -string ``execnet.``. +If you have previosly used "py.execnet.*" and the 1.0 API just +rename all occurences of the string "``py.execnet.``" with the +string "``execnet.``" as execnet-1.0 is API compatible. .. _`execnet standalone package`: http://codespeak.net/execnet From commits-noreply at bitbucket.org Tue Nov 24 10:49:55 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Tue, 24 Nov 2009 09:49:55 +0000 (UTC) Subject: [py-svn] py-trunk commit ba836958eeb8: introduce plugin discovery through setuptools "pytest11" entrypoints Message-ID: <20091124094955.A84EB7EF11@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258993236 -3600 # Node ID ba836958eeb8e7c94f98571fe0544091e52cdea9 # Parent 050edb3ab1a9568f2bed3bdebaafbbd0c1cb9660 introduce plugin discovery through setuptools "pytest11" entrypoints and refine execnet dependency handling. 
Prepare 1.1 release --- a/CHANGELOG +++ b/CHANGELOG @@ -1,7 +1,10 @@ Changes between 1.1.1 and 1.1.0 ===================================== -- fix py.test dist-testing to work with execnet >= 1.0.0b4 (required) +- introduce automatic lookup of 'pytest11' entrypoints + via setuptools' pkg_resources.iter_entry_points + +- fix py.test dist-testing to work with execnet >= 1.0.0b4 - re-introduce py.test.cmdline.main() for better backward compatibility --- a/doc/test/customize.txt +++ b/doc/test/customize.txt @@ -125,6 +125,8 @@ Plugin discovery at tool startup py.test loads plugin modules at tool startup in the following way: +* by loading all plugins registered through `setuptools entry points`_. + * by reading the ``PYTEST_PLUGINS`` environment variable and importing the comma-separated list of named plugins. @@ -132,17 +134,13 @@ py.test loads plugin modules at tool sta and loading the specified plugin before actual command line parsing. * by loading all `conftest.py plugin`_ files as inferred by the command line - invocation + invocation (test files and all of its parent directories). + Note that ``conftest.py`` files from sub directories are loaded + during test collection and not at tool startup. * by recursively loading all plugins specified by the ``pytest_plugins`` variable in a ``conftest.py`` file -Note that at tool startup only ``conftest.py`` files in -the directory of the specified test modules (or the current dir if None) -or any of the parent directories are found. There is no try to -pre-scan all subdirectories to find ``conftest.py`` files or test -modules. - Specifying plugins in a test module or plugin ----------------------------------------------- @@ -160,8 +158,8 @@ must be lowercase. .. _`conftest.py plugin`: .. _`conftestplugin`: -conftest.py as anonymous per-project plugins --------------------------------------------------- +Writing per-project plugins (conftest.py) +------------------------------------------------------ The purpose of ``conftest.py`` files is to allow `project-specific test configuration`_. They thus make for a good place to implement @@ -181,6 +179,55 @@ by defining the following hook in a ``co if config.getvalue("runall"): collect_ignore[:] = [] +.. _`setuptools entry points`: + +Writing setuptools-registered plugins +------------------------------------------------------ + +.. _`Distribute`: http://pypi.python.org/pypi/distribute +.. _`setuptools`: http://pypi.python.org/pypi/setuptools + +If you want to make your plugin publically available, you +can use `setuptools`_ or `Distribute`_ which both allow +to register an entry point. ``py.test`` will register +all objects with the ``pytest11`` entry point. +To make your plugin available you may insert the following +lines in your setuptools/distribute-based setup-invocation: + +.. sourcecode:: python + + # sample ./setup.py file + from setuptools import setup + + setup( + name="myproject", + packages = ['myproject'] + + # the following makes a plugin available to py.test + entry_points = { + 'pytest11': [ + 'name_of_plugin = myproject.pluginmodule', + ] + }, + ) + +If a package is installed with this setup, py.test will load +``myproject.pluginmodule`` under the ``name_of_plugin`` name +and use it as a plugin. + +Accessing another plugin by name +-------------------------------------------- + +If a plugin wants to collaborate with code from +another plugin it can obtain a reference through +the plugin manager like this: + +.. 
sourcecode:: python + + plugin = config.pluginmanager.getplugin("name_of_plugin") + +If you want to look at the names of existing plugins, use +the ``--traceconfig`` option. .. _`well specified hooks`: .. _`implement hooks`: --- a/testing/pytest/test_pluginmanager.py +++ b/testing/pytest/test_pluginmanager.py @@ -42,6 +42,24 @@ class TestBootstrapping: l3 = len(pluginmanager.getplugins()) assert l2 == l3 + def test_consider_setuptools_instantiation(self, monkeypatch): + pkg_resources = py.test.importorskip("pkg_resources") + def my_iter(name): + assert name == "pytest11" + class EntryPoint: + name = "mytestplugin" + def load(self): + class PseudoPlugin: + x = 42 + return PseudoPlugin() + return iter([EntryPoint()]) + + monkeypatch.setattr(pkg_resources, 'iter_entry_points', my_iter) + pluginmanager = PluginManager() + pluginmanager.consider_setuptools_entrypoints() + plugin = pluginmanager.getplugin("mytestplugin") + assert plugin.x == 42 + def test_pluginmanager_ENV_startup(self, testdir, monkeypatch): x500 = testdir.makepyfile(pytest_x500="#") p = testdir.makepyfile(""" --- a/bin-for-dist/test_install.py +++ b/bin-for-dist/test_install.py @@ -88,6 +88,16 @@ class VirtualEnv(object): ] + list(args), **kw) + def pytest_getouterr(self, *args): + self.ensure() + args = [self._cmd("python"), self._cmd("py.test")] + list(args) + popen = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) + out, err = popen.communicate() + return out + + def setup_develop(self): + self.ensure() + return self.pcall("python", "setup.py", "develop") def easy_install(self, *packages, **kw): args = [] @@ -110,4 +120,25 @@ def test_make_sdist_and_run_it(py_setup, ch = gw.remote_exec("import py ; channel.send(py.__version__)") version = ch.receive() assert version == py.__version__ - ch = gw.remote_exec("import py ; channel.send(py.__version__)") + +def test_plugin_setuptools_entry_point_integration(py_setup, venv, tmpdir): + sdist = py_setup.make_sdist(venv.path) + venv.easy_install(str(sdist)) + # create a sample plugin + basedir = tmpdir.mkdir("testplugin") + basedir.join("setup.py").write("""if 1: + from setuptools import setup + setup(name="testplugin", + entry_points = {'pytest11': ['testplugin=tp1']}, + py_modules = ['tp1'], + ) + """) + basedir.join("tp1.py").write(py.code.Source(""" + def pytest_addoption(parser): + parser.addoption("--testpluginopt", action="store_true") + """)) + basedir.chdir() + print ("created sample plugin in %s" %basedir) + venv.setup_develop() + out = venv.pytest_getouterr("-h") + assert "testpluginopt" in out --- a/py/impl/test/config.py +++ b/py/impl/test/config.py @@ -74,6 +74,7 @@ class Config(object): def _preparse(self, args): self._conftest.setinitial(args) + self.pluginmanager.consider_setuptools_entrypoints() self.pluginmanager.consider_preparse(args) self.pluginmanager.consider_env() self.pluginmanager.do_addoption(self._parser) --- a/py/impl/test/pluginmanager.py +++ b/py/impl/test/pluginmanager.py @@ -77,6 +77,14 @@ class PluginManager(object): for spec in self._envlist("PYTEST_PLUGINS"): self.import_plugin(spec) + def consider_setuptools_entrypoints(self): + from pkg_resources import iter_entry_points + for ep in iter_entry_points('pytest11'): + if ep.name in self._name2plugin: + continue + plugin = ep.load() + self.register(plugin, name=ep.name) + def consider_preparse(self, args): for opt1,opt2 in zip(args, args[1:]): if opt1 == "-p": --- a/doc/announce/release-1.1.1.txt +++ b/doc/announce/release-1.1.1.txt @@ -1,10 +1,10 @@ -py.test/pylib 1.1.1: 
bugfix release, improved 1.0.x backward compat +py.test/pylib 1.1.1: bugfix release, setuptools plugin registration -------------------------------------------------------------------------------- This is a compatibility fixing release of pylib/py.test to work -better with previous 1.0.x code bases. It also contains fixes -and changes to work with `execnet>=1.0.0b4`_ which is now required -(but is not installed automatically, issue "easy_install -U execnet"). +better with previous 1.0.x test code bases. It also contains fixes +and changes to work with `execnet>=1.0.0b4`_. 1.1.1 also introduces +a new mechanism for registering plugins via setuptools. Last but not least, documentation has been improved. What is pylib/py.test? @@ -17,7 +17,7 @@ existing common Python test suites witho it offers some unique features not found in other testing tools. See http://pytest.org for more info. -The pylib contains a localpath and svnpath implementation +The pylib also contains a localpath and svnpath implementation and some developer-oriented command line tools. See http://pylib.org for more info. @@ -31,7 +31,10 @@ holger (http://twitter.com/hpk42) Changes between 1.1.1 and 1.1.0 ===================================== -- fix py.test dist-testing to work with execnet >= 1.0.0b4 (required) +- introduce automatic lookup of 'pytest11' entrypoints + via setuptools' pkg_resources.iter_entry_points + +- fix py.test dist-testing to work with execnet >= 1.0.0b4 - re-introduce py.test.cmdline.main() for better backward compatibility From commits-noreply at bitbucket.org Tue Nov 24 10:49:57 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Tue, 24 Nov 2009 09:49:57 +0000 (UTC) Subject: [py-svn] py-trunk commit 07b3c619aec8: small refinements/precision regarding execnet checks Message-ID: <20091124094957.8E37F7EF19@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258993546 -3600 # Node ID 07b3c619aec8b335c8345f256c7179243d740e24 # Parent ba836958eeb8e7c94f98571fe0544091e52cdea9 small refinements/precision regarding execnet checks --- a/py/impl/test/looponfail/remote.py +++ b/py/impl/test/looponfail/remote.py @@ -54,7 +54,7 @@ class RemoteControl(object): py.builtin.print_("RemoteControl:", msg) def initgateway(self): - return execnet.PopenGateway() + return execnet.makegateway("popen") def setup(self, out=None): if out is None: --- a/py/plugin/pytest_default.py +++ b/py/plugin/pytest_default.py @@ -7,6 +7,9 @@ try: import execnet except ImportError: execnet = None +else: + if not hasattr(execnet, 'Group'): + execnet = None def pytest_pyfunc_call(__multicall__, pyfuncitem): if not __multicall__.execute(): @@ -70,7 +73,7 @@ def pytest_addoption(parser): add_dist_options(parser) else: parser.epilog = ( - "'execnet' package required for --looponfailing / distributed testing.") + "'execnet>=1.0.0b4' package required for --looponfailing / distributed testing.") def add_dist_options(parser): # see http://pytest.org/help/dist") From commits-noreply at bitbucket.org Tue Nov 24 10:49:49 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Tue, 24 Nov 2009 09:49:49 +0000 (UTC) Subject: [py-svn] py-trunk commit 8a8f2ba6f266: fix a flaky test Message-ID: <20091124094949.3513D7EF14@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258707880 -3600 # Node ID 
8a8f2ba6f266f6ea9a555621df711122d048af1c # Parent d7614a65d21983e9e1cd8df199f0d8c7a8f024ea fix a flaky test --- a/testing/pytest/dist/test_gwmanage.py +++ b/testing/pytest/dist/test_gwmanage.py @@ -82,7 +82,7 @@ class TestGatewayManagerPopen: call = hookrecorder.popcall("pytest_gwmanage_rsyncstart") assert call.source == source assert len(call.gateways) == 1 - assert hm.group["1"] == call.gateways[0] + assert call.gateways[0] in hm.group call = hookrecorder.popcall("pytest_gwmanage_rsyncfinish") class pytest_funcarg__mysetup: From commits-noreply at bitbucket.org Tue Nov 24 10:49:51 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Tue, 24 Nov 2009 09:49:51 +0000 (UTC) Subject: [py-svn] py-trunk commit a77479cc7181: fixing docs, adding draft announcement Message-ID: <20091124094951.9A1097EF15@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1258708669 -3600 # Node ID a77479cc718174d122d679c45761ef19840d7bd8 # Parent 8a8f2ba6f266f6ea9a555621df711122d048af1c fixing docs, adding draft announcement --- a/doc/test/dist.txt +++ b/doc/test/dist.txt @@ -12,9 +12,9 @@ are reported back and displayed to your specify different Python versions and interpreters. **Requirements**: you need to install the `execnet`_ package -to perform distributed test runs. +(at least version 1.0.0b4) to perform distributed test runs. -**NOTE**: Version 1.1.0 is not able to distribute tests across Python3/Python2 barriers. +**NOTE**: Version 1.1.x is not able to distribute tests across Python3/Python2 barriers. Speed up test runs by sending tests to multiple CPUs ---------------------------------------------------------- --- /dev/null +++ b/doc/announce/release-1.1.1.txt @@ -0,0 +1,44 @@ +py.test/pylib 1.1.1: bugfix release, improved 1.0.x backward compat +-------------------------------------------------------------------------------- + +This is a compatibility fixing release of pylib/py.test to work +better with previous 1.0.x code bases. It also contains fixes +and changes to work with `execnet>=1.0.0b4`_ which is now required +(but is not installed automatically, issue "easy_install -U execnet"). +Last but not least, documentation has been improved. + +What is pylib/py.test? +----------------------- + +py.test is an advanced automated testing tool working with +Python2, Python3 and Jython versions on all major operating +systems. It has an extensive plugin architecture and can run many +existing common Python test suites without modification. Moreover, +it offers some unique features not found in other +testing tools. See http://pytest.org for more info. + +The pylib contains a localpath and svnpath implementation +and some developer-oriented command line tools. See +http://pylib.org for more info. + +thanks to all who helped and gave feedback, +have fun, + +holger (http://twitter.com/hpk42) + +.. _`execnet>=1.0.0b4`: http://codespeak.net/execnet + +Changes between 1.1.1 and 1.1.0 +===================================== + +- fix py.test dist-testing to work with execnet >= 1.0.0b4 (required) + +- re-introduce py.test.cmdline.main() for better backward compatibility + +- svn paths: fix a bug with path.check(versioned=True) for svn paths, + allow '%' in svn paths, make svnwc.update() default to interactive mode + like in 1.0.x and add svnwc.update(interactive=False) to inhibit interaction. 
+ +- try harder to have deprecation warnings for py.compat.* accesses + report a correct location + From commits-noreply at bitbucket.org Tue Nov 24 10:49:57 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Tue, 24 Nov 2009 09:49:57 +0000 (UTC) Subject: [py-svn] py-trunk commit 77705f6e09a7: don't consider setuptools plugins if it is not installed. Message-ID: <20091124094957.9EAE17EF11@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1259056144 -3600 # Node ID 77705f6e09a75cd3199390d3bb3f0000d1560ddb # Parent 07b3c619aec8b335c8345f256c7179243d740e24 don't consider setuptools plugins if it is not installed. --- a/testing/pytest/test_pluginmanager.py +++ b/testing/pytest/test_pluginmanager.py @@ -60,6 +60,13 @@ class TestBootstrapping: plugin = pluginmanager.getplugin("mytestplugin") assert plugin.x == 42 + def test_consider_setuptools_not_installed(self, monkeypatch): + monkeypatch.setitem(py.std.sys.modules, 'pkg_resources', + py.std.types.ModuleType("pkg_resources")) + pluginmanager = PluginManager() + pluginmanager.consider_setuptools_entrypoints() + # ok, we did not explode + def test_pluginmanager_ENV_startup(self, testdir, monkeypatch): x500 = testdir.makepyfile(pytest_x500="#") p = testdir.makepyfile(""" --- a/py/impl/test/pluginmanager.py +++ b/py/impl/test/pluginmanager.py @@ -78,7 +78,10 @@ class PluginManager(object): self.import_plugin(spec) def consider_setuptools_entrypoints(self): - from pkg_resources import iter_entry_points + try: + from pkg_resources import iter_entry_points + except ImportError: + return # XXX issue a warning for ep in iter_entry_points('pytest11'): if ep.name in self._name2plugin: continue From commits-noreply at bitbucket.org Tue Nov 24 14:47:14 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Tue, 24 Nov 2009 13:47:14 +0000 (UTC) Subject: [py-svn] py-trunk commit 1c31b7e6dde2: by default flush log writes to files Message-ID: <20091124134714.08A9B7EF01@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1259059693 28800 # Node ID 1c31b7e6dde204243118244ee1f108912d37cb12 # Parent 77705f6e09a75cd3199390d3bb3f0000d1560ddb by default flush log writes to files --- a/py/impl/log/log.py +++ b/py/impl/log/log.py @@ -132,6 +132,8 @@ class File(object): def __call__(self, msg): """ write a message to the log """ self._file.write(str(msg) + "\n") + if hasattr(self._file, 'flush'): + self._file.flush() class Path(object): """ log consumer that opens and writes to a Path """ From commits-noreply at bitbucket.org Tue Nov 24 18:04:46 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Tue, 24 Nov 2009 17:04:46 +0000 (UTC) Subject: [py-svn] py-trunk commit 7cb933fe5ced: Added tag 1.1.1 for changeset 319187fcda66 Message-ID: <20091124170446.E52F57EF3E@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1259082122 -3600 # Node ID 7cb933fe5ced7962a60124de3b93aefd389d4b31 # Parent 319187fcda66714c5eb1353492babeec3d3c826f Added tag 1.1.1 for changeset 319187fcda66 --- a/.hgtags +++ b/.hgtags @@ -19,3 +19,4 @@ 7acde360d94b6a2690ce3d03ff39301da84c0a2b 6bd221981ac99103002c1cb94fede400d23a96a1 1.0.1 4816e8b80602a3fd3a0a120333ad85fbe7d8bab4 1.0.2 
60c44bdbf093285dc69d5462d4dbb4acad325ca6 1.1.0 +319187fcda66714c5eb1353492babeec3d3c826f 1.1.1 From commits-noreply at bitbucket.org Tue Nov 24 18:04:46 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Tue, 24 Nov 2009 17:04:46 +0000 (UTC) Subject: [py-svn] py-trunk commit 319187fcda66: adjustments and fixes to test run, distribution files. thanks thm. Message-ID: <20091124170446.AF1AF7EF3C@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1259072218 -3600 # Node ID 319187fcda66714c5eb1353492babeec3d3c826f # Parent 1c31b7e6dde204243118244ee1f108912d37cb12 adjustments and fixes to test run, distribution files. thanks thm. --- a/testing/plugin/test_pytest_restdoc.py +++ b/testing/plugin/test_pytest_restdoc.py @@ -47,6 +47,9 @@ class TestDoctest: assert request.module.__name__ == __name__ testdir.makepyfile(confrest= "from py.plugin.pytest_restdoc import Project") + # we scope our confrest file so that it doesn't + # conflict with another global confrest.py + testdir.makepyfile(__init__="") for p in testdir.plugins: if p == globals(): break --- a/doc/example/assertion/global_testmodule_config/conftest.py +++ b/doc/example/assertion/global_testmodule_config/conftest.py @@ -1,7 +1,10 @@ import py +mydir = py.path.local(__file__).dirpath() def pytest_runtest_setup(item): if isinstance(item, py.test.collect.Function): + if not item.fspath.relto(mydir): + return mod = item.getparent(py.test.collect.Module).obj if hasattr(mod, 'hello'): py.builtin.print_("mod.hello", mod.hello) --- a/CHANGELOG +++ b/CHANGELOG @@ -1,8 +1,8 @@ Changes between 1.1.1 and 1.1.0 ===================================== -- introduce automatic lookup of 'pytest11' entrypoints - via setuptools' pkg_resources.iter_entry_points +- introduce automatic plugin registration via 'pytest11' + entrypoints via setuptools' pkg_resources.iter_entry_points - fix py.test dist-testing to work with execnet >= 1.0.0b4 @@ -12,6 +12,8 @@ Changes between 1.1.1 and 1.1.0 allow '%' in svn paths, make svnwc.update() default to interactive mode like in 1.0.x and add svnwc.update(interactive=False) to inhibit interaction. 
+- refine distributed tarball to contain test and no pyc files + - try harder to have deprecation warnings for py.compat.* accesses report a correct location --- a/testing/plugin/test_pytest_terminal.py +++ b/testing/plugin/test_pytest_terminal.py @@ -374,7 +374,6 @@ class TestCollectonly: p = testdir.makepyfile("import Errlkjqweqwe") result = testdir.runpytest("--collectonly", p) stderr = result.stderr.str().strip() - assert stderr.startswith("inserting into sys.path") assert result.ret == 1 extra = result.stdout.fnmatch_lines(py.code.Source(""" --- a/MANIFEST.in +++ b/MANIFEST.in @@ -3,11 +3,16 @@ include README.txt include setup.py include distribute_setup.py include LICENSE +include conftest.py graft doc graft contrib graft bin +graft testing exclude *.orig exclude *.rej exclude .hginore +exclude *.pyc +recursive-exclude testing *.pyc *.orig *.rej *$py.class +prune .pyc prune .svn prune .hg --- a/py/plugin/pytest_pytester.py +++ b/py/plugin/pytest_pytester.py @@ -319,7 +319,7 @@ class TmpTestdir: return self.runpybin("py.test", *args) def spawn_pytest(self, string, expect_timeout=10.0): - pexpect = py.test.importorskip("pexpect", "2.3") + pexpect = py.test.importorskip("pexpect", "2.4") basetemp = self.tmpdir.mkdir("pexpect") invoke = "%s %s" % self._getpybinargs("py.test") cmd = "%s --basetemp=%s %s" % (invoke, basetemp, string) --- a/testing/path/test_svnauth.py +++ b/testing/path/test_svnauth.py @@ -4,6 +4,8 @@ from py.path import SvnAuth import time import sys +svnbin = py.path.local.sysfind('svn') + def make_repo_auth(repo, userdata): """ write config to repo @@ -257,6 +259,8 @@ class TestSvnURLAuth(object): class pytest_funcarg__setup: def __init__(self, request): + if not svnbin: + py.test.skip("svn binary required") if not request.config.option.runslowtests: py.test.skip('use --runslowtests to run these tests') --- a/doc/announce/release-1.1.1.txt +++ b/doc/announce/release-1.1.1.txt @@ -3,9 +3,9 @@ py.test/pylib 1.1.1: bugfix release, set This is a compatibility fixing release of pylib/py.test to work better with previous 1.0.x test code bases. It also contains fixes -and changes to work with `execnet>=1.0.0b4`_. 1.1.1 also introduces +and changes to work with `execnet>=1.0.0`_ to provide distributed +testing and looponfailing testing modes. py-1.1.1 also introduces a new mechanism for registering plugins via setuptools. -Last but not least, documentation has been improved. What is pylib/py.test? ----------------------- @@ -26,13 +26,13 @@ have fun, holger (http://twitter.com/hpk42) -.. _`execnet>=1.0.0b4`: http://codespeak.net/execnet +.. _`execnet>=1.0.0`: http://codespeak.net/execnet Changes between 1.1.1 and 1.1.0 ===================================== -- introduce automatic lookup of 'pytest11' entrypoints - via setuptools' pkg_resources.iter_entry_points +- introduce automatic plugin registration via 'pytest11' + entrypoints via setuptools' pkg_resources.iter_entry_points - fix py.test dist-testing to work with execnet >= 1.0.0b4 @@ -42,6 +42,7 @@ Changes between 1.1.1 and 1.1.0 allow '%' in svn paths, make svnwc.update() default to interactive mode like in 1.0.x and add svnwc.update(interactive=False) to inhibit interaction. 
+- refine distributed tarball to contain test and no pyc files + - try harder to have deprecation warnings for py.compat.* accesses report a correct location - From commits-noreply at bitbucket.org Fri Nov 27 20:46:37 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Fri, 27 Nov 2009 19:46:37 +0000 (UTC) Subject: [py-svn] py-trunk commit 1ee4a66511d2: starting an ISSUES.txt with a conftest issue Message-ID: <20091127194637.C88A97EF52@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1259175039 -3600 # Node ID 1ee4a66511d2eb065198a9a3a7d2f23e0cf6c218 # Parent 7cb933fe5ced7962a60124de3b93aefd389d4b31 starting an ISSUES.txt with a conftest issue --- /dev/null +++ b/ISSUES.txt @@ -0,0 +1,18 @@ + +consider conftest hooks only for items below the dir +--------------------------------------------------------- +tags: bug + +currently conftest hooks remain registered throughout +the whole testing process. Consider to only have them +called if their filesystem location is below a test item. + + +introduce py.test.mark.ignoretest +------------------------------------------------------- +tags: feature + +for not considering a function for test collection at all. +maybe also introduce a py.test.mark.test to explicitely +mark a function to become a tested one. Lookup +Java JUnit recent strategies/syntax. From commits-noreply at bitbucket.org Fri Nov 27 20:46:39 2009 From: commits-noreply at bitbucket.org (commits-noreply at bitbucket.org) Date: Fri, 27 Nov 2009 19:46:39 +0000 (UTC) Subject: [py-svn] py-trunk commit f29d1d8a6384: fix keyword calling Message-ID: <20091127194639.9F1537EF54@bitbucket.org> # HG changeset patch -- Bitbucket.org # Project py-trunk # URL http://bitbucket.org/hpk42/py-trunk/overview/ # User holger krekel # Date 1259350341 -3600 # Node ID f29d1d8a63849e6d6fcb70593c671ed07f69842b # Parent 1ee4a66511d2eb065198a9a3a7d2f23e0cf6c218 fix keyword calling --- a/testing/code/test_assertion.py +++ b/testing/code/test_assertion.py @@ -93,6 +93,14 @@ def test_assert_non_string_message(): e = exvalue() assert e.msg == "hello" +def test_assert_keyword_arg(): + def f(x=3): + return False + try: + assert f(x=5) + except AssertionError: + e = exvalue() + assert "x=5" in e.msg # These tests should both fail, but should fail nicely... class WeirdRepr: --- a/py/impl/code/_assertionnew.py +++ b/py/impl/code/_assertionnew.py @@ -234,7 +234,7 @@ class DebugInterpreter(ast.NodeVisitor): arg_explanation, arg_result = self.visit(keyword.value) arg_name = "__exprinfo_%s" % (len(ns),) ns[arg_name] = arg_result - keyword_source = "%s=%%s" % (keyword.id) + keyword_source = "%s=%%s" % (keyword.arg) arguments.append(keyword_source % (arg_name,)) arg_explanations.append(keyword_source % (arg_explanation,)) if call.starargs: --- a/CHANGELOG +++ b/CHANGELOG @@ -1,3 +1,8 @@ +Changes between 1.1.2 and 1.1.1 +===================================== + +- fix assert reinterpreation that sees a call containing "keyword=..." + Changes between 1.1.1 and 1.1.0 =====================================
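
The ``keyword.id`` -> ``keyword.arg`` change above matches how the standard ``ast`` module, whose nodes the ``DebugInterpreter`` in ``_assertionnew.py`` visits, represents a ``keyword=...`` argument of a call: the name is stored on the ``arg`` attribute of an ``ast.keyword`` node (there is no ``id`` attribute) and the value is a separate child expression node. A minimal sketch, for illustration only -- the function name ``f`` is made up and nothing below is part of the changeset; it merely shows the ``ast`` behaviour the fix relies on (Python 2.6+, where the ``ast`` module exists):

.. sourcecode:: python

    # Hypothetical illustration: how ``ast`` exposes an ``x=5`` keyword
    # argument of a call.  The name lives on ``.arg`` (not ``.id``); the
    # value is a separate expression node that can be evaluated and
    # formatted independently.
    import ast

    call = ast.parse("f(x=5)", mode="eval").body   # an ast.Call node
    kw = call.keywords[0]                          # the ``x=5`` part
    print(kw.arg)                # -> 'x'
    print(ast.dump(kw.value))    # the AST of the value expression
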