What to make of 'make test' for a Python 3 install from source (BeagleBone Angstrom install).

Ned Deily nad at acm.org
Thu Nov 7 22:18:17 EST 2013


In article <F04D0E1B-E7CB-4E2B-BB26-A8BD3A13E2B5 at gmail.com>,
 Travis Griggs <travisgriggs at gmail.com> wrote:
> One part of the recommended install is to 'make test'. In a perfect world, I 
> guess everything would pass. Since I'm running an embedded linux, on an arm 
> processor, I kind of expect some issues. As the tests run, I see that there 
> are indeed some errors here and there. But I don't see where they get 
> summarized or anything. I guess I can try to capture the output and grep 
> through it. I'm curious how people use the make install. Looking to bootstrap 
> off of others' experience, if anyone has some they're willing to share.

When you run "make test", you are running the built-in test.regrtest test 
runner.  You can look at the Makefile to see exactly what the "test" target 
does.  You can also run the tests directly.  At the end of the regrtest run, 
there is a summary of failed and skipped tests, for example:

./python -m test -w -uall
== CPython 3.4.0a4 (v3.4.0a4:e245b0d7209b, Oct 20 2013, 02:43:50) [GCC 4.2.1 
(Apple Inc. build 5666) (dot 3)]
==   Darwin-12.5.0-x86_64-i386-64bit little-endian
==   /private/var/folders/fm/9wjgctqx61n796zt88qmmnxc0000gn/T/test_python_42707
Testing with flags: sys.flags(debug=0, inspect=0, interactive=0, optimize=0, 
dont_write_bytecode=0, no_user_site=0, no_site=0, ignore_environment=0, 
verbose=0, bytes_warning=0, quiet=0, hash_randomization=1, isolated=0)
[  1/383] test_grammar
[  2/383] test_opcodes
  [...]
[382/383/1] test_zipimport_support
[383/383/1] test_zlib
369 tests OK.
1 test failed:
    test_pydoc
1 test altered the execution environment:
    test_site
12 tests skipped:
    test_dbm_gnu test_devpoll test_epoll test_gdb test_idle
    test_msilib test_ossaudiodev test_startfile test_tools test_winreg
    test_winsound test_zipfile64
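
If you want a persistent record you can grep through, as you were considering, 
one plain shell idiom works (nothing specific to the Python build; the log 
file name here is just an example): tee the output to a file and then search 
it for the summary lines:

./python -m test -w -uall 2>&1 | tee test-run.log
grep -E '^[0-9]+ tests? (OK|failed|skipped|altered)' test-run.log

The same idiom works with "make test" itself.  The exact wording of the 
summary lines can vary between Python versions, so treat the pattern as a 
starting point.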

In a perfect world, all the tests would pass on every supported platform, but 
there are significant differences in what each OS supports and in which 
third-party libraries are available on a given system, so test results will 
vary across environments.  What you should be most concerned about are pure 
Python failures, like the test_pydoc failure above.  You may have to look at a 
failing test to get an idea of what it is really doing and whether the failure 
is significant in your environment.  For a new platform, establish a baseline, 
i.e. the normal results in your environment, and then use the tests to find 
regressions as you make changes or pull in upstream updates.  You can also get 
more information about regrtest options:

./python -m test -h

(Note: for Python 2.7, replace "test" with "test.regrtest".)
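
To dig into an individual failure like test_pydoc, you can pass the test name 
to regrtest and turn on verbose mode, which shows each sub-test as it runs and 
a traceback for any failure:

./python -m test -v test_pydoc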

-- 
 Ned Deily,
 nad at acm.org
