[Numpy-svn] r6188 - in branches/dynamic_cpu_configuration: . doc doc/release doc/source doc/source/_templates doc/source/reference doc/sphinxext doc/sphinxext/tests numpy/core numpy/core/code_generators numpy/core/src numpy/core/tests numpy/distutils/fcompiler numpy/f2py numpy/lib numpy/lib/tests numpy/ma numpy/ma/tests tools/win32build/nsis_scripts

numpy-svn at scipy.org
Mon Dec 22 08:24:31 EST 2008


Author: cdavid
Date: 2008-12-22 07:23:03 -0600 (Mon, 22 Dec 2008)
New Revision: 6188

Added:
   branches/dynamic_cpu_configuration/doc/release/1.3.0-notes.rst
   branches/dynamic_cpu_configuration/doc/source/release.rst
   branches/dynamic_cpu_configuration/doc/sphinxext/
   branches/dynamic_cpu_configuration/doc/sphinxext/LICENSE.txt
   branches/dynamic_cpu_configuration/doc/sphinxext/__init__.py
   branches/dynamic_cpu_configuration/doc/sphinxext/autosummary.py
   branches/dynamic_cpu_configuration/doc/sphinxext/autosummary_generate.py
   branches/dynamic_cpu_configuration/doc/sphinxext/comment_eater.py
   branches/dynamic_cpu_configuration/doc/sphinxext/compiler_unparse.py
   branches/dynamic_cpu_configuration/doc/sphinxext/docscrape.py
   branches/dynamic_cpu_configuration/doc/sphinxext/docscrape_sphinx.py
   branches/dynamic_cpu_configuration/doc/sphinxext/numpydoc.py
   branches/dynamic_cpu_configuration/doc/sphinxext/only_directives.py
   branches/dynamic_cpu_configuration/doc/sphinxext/phantom_import.py
   branches/dynamic_cpu_configuration/doc/sphinxext/plot_directive.py
   branches/dynamic_cpu_configuration/doc/sphinxext/tests/
   branches/dynamic_cpu_configuration/doc/sphinxext/tests/test_docscrape.py
   branches/dynamic_cpu_configuration/doc/sphinxext/traitsdoc.py
   branches/dynamic_cpu_configuration/numpy/core/code_generators/ufunc_docstrings.py
Removed:
   branches/dynamic_cpu_configuration/doc/sphinxext/LICENSE.txt
   branches/dynamic_cpu_configuration/doc/sphinxext/__init__.py
   branches/dynamic_cpu_configuration/doc/sphinxext/autosummary.py
   branches/dynamic_cpu_configuration/doc/sphinxext/autosummary_generate.py
   branches/dynamic_cpu_configuration/doc/sphinxext/comment_eater.py
   branches/dynamic_cpu_configuration/doc/sphinxext/compiler_unparse.py
   branches/dynamic_cpu_configuration/doc/sphinxext/docscrape.py
   branches/dynamic_cpu_configuration/doc/sphinxext/docscrape_sphinx.py
   branches/dynamic_cpu_configuration/doc/sphinxext/numpydoc.py
   branches/dynamic_cpu_configuration/doc/sphinxext/only_directives.py
   branches/dynamic_cpu_configuration/doc/sphinxext/phantom_import.py
   branches/dynamic_cpu_configuration/doc/sphinxext/plot_directive.py
   branches/dynamic_cpu_configuration/doc/sphinxext/tests/
   branches/dynamic_cpu_configuration/doc/sphinxext/tests/test_docscrape.py
   branches/dynamic_cpu_configuration/doc/sphinxext/traitsdoc.py
   branches/dynamic_cpu_configuration/numpy/core/code_generators/docstrings.py
Modified:
   branches/dynamic_cpu_configuration/
   branches/dynamic_cpu_configuration/THANKS.txt
   branches/dynamic_cpu_configuration/doc/Makefile
   branches/dynamic_cpu_configuration/doc/source/_templates/indexcontent.html
   branches/dynamic_cpu_configuration/doc/source/conf.py
   branches/dynamic_cpu_configuration/doc/source/contents.rst
   branches/dynamic_cpu_configuration/doc/source/reference/routines.emath.rst
   branches/dynamic_cpu_configuration/doc/source/reference/routines.ma.rst
   branches/dynamic_cpu_configuration/doc/source/reference/routines.matlib.rst
   branches/dynamic_cpu_configuration/doc/source/reference/routines.numarray.rst
   branches/dynamic_cpu_configuration/doc/source/reference/routines.oldnumeric.rst
   branches/dynamic_cpu_configuration/doc/source/reference/ufuncs.rst
   branches/dynamic_cpu_configuration/doc/summarize.py
   branches/dynamic_cpu_configuration/numpy/core/SConscript
   branches/dynamic_cpu_configuration/numpy/core/code_generators/generate_umath.py
   branches/dynamic_cpu_configuration/numpy/core/src/umath_funcs_c99.inc.src
   branches/dynamic_cpu_configuration/numpy/core/tests/test_umath.py
   branches/dynamic_cpu_configuration/numpy/distutils/fcompiler/gnu.py
   branches/dynamic_cpu_configuration/numpy/f2py/crackfortran.py
   branches/dynamic_cpu_configuration/numpy/lib/format.py
   branches/dynamic_cpu_configuration/numpy/lib/io.py
   branches/dynamic_cpu_configuration/numpy/lib/polynomial.py
   branches/dynamic_cpu_configuration/numpy/lib/tests/test_io.py
   branches/dynamic_cpu_configuration/numpy/ma/core.py
   branches/dynamic_cpu_configuration/numpy/ma/extras.py
   branches/dynamic_cpu_configuration/numpy/ma/tests/test_core.py
   branches/dynamic_cpu_configuration/numpy/ma/tests/test_extras.py
   branches/dynamic_cpu_configuration/numpy/ma/testutils.py
   branches/dynamic_cpu_configuration/setup.py
   branches/dynamic_cpu_configuration/tools/win32build/nsis_scripts/numpy-superinstaller.nsi.in
Log:
Merged revisions 6108,6110,6112-6127,6129-6134,6136-6138,6140-6149,6174-6175,6179-6182,6185-6187 via svnmerge from 
http://svn.scipy.org/svn/numpy/trunk

........
  r6108 | pierregm | 2008-11-26 11:13:57 +0900 (Wed, 26 Nov 2008) | 3 lines
  
  * added ma.diag
  * added copy, cumprod, cumsum, harden_mask, prod, round, soften_mask, squeeze to the namespace
  * TEMPORARILY fixed a compatibility problem with Python 2.6 (involving ``in (np.nan)``)
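
  The new ma.diag mirrors np.diag while preserving the mask; a minimal
  sketch with made-up values:

      import numpy as np

      # The masked diagonal entry stays masked in the result.
      x = np.ma.array(np.arange(9).reshape(3, 3),
                      mask=[[0, 0, 0], [0, 1, 0], [0, 0, 0]])
      print(np.ma.diag(x))      # [0 -- 8]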
........
  r6110 | pierregm | 2008-11-27 13:29:43 +0900 (Thu, 27 Nov 2008) | 5 lines
  
  * Added get_object_signature to fix missing signatures
  * Fixed .getdoc from _arraymethod, _frommethod, _convert2ma, _fromnxfunction
  * Fixed the docstrings of .trace, .mean, .argsort, .sort
  * Suppressed duplicated conjugate, ptp, round, expand_dims, apply_along_axis, compress_rowcols, mask_rowcols, vander, polyfit
........
  r6112 | pierregm | 2008-11-27 15:56:12 +0900 (Thu, 27 Nov 2008) | 1 line
  
  Doc update
........
  r6113 | jarrod.millman | 2008-11-27 19:58:51 +0900 (Thu, 27 Nov 2008) | 2 lines
  
  add release notes for 1.3
........
  r6114 | ptvirtan | 2008-11-28 05:26:04 +0900 (Fri, 28 Nov 2008) | 1 line
  
  doc: include release notes to Sphinx build
........
  r6115 | charris | 2008-11-28 12:52:16 +0900 (Fri, 28 Nov 2008) | 2 lines
  
  Make numpy version of atanh more robust.
  Numpy log1p still needs a major overhaul.
........
  r6116 | charris | 2008-11-28 14:34:33 +0900 (Fri, 28 Nov 2008) | 5 lines
  
  Add preliminary docstrings for:
  log2, exp2, logaddexp, logaddexp2, rad2deg, deg2rad.
  
  The complete docstrings for fmin and fmax are on the web but
  haven't yet been merged.
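
  For context, a quick sketch of two of the newly documented ufuncs
  (values are arbitrary):

      import numpy as np

      # logaddexp2(a, b) computes log2(2**a + 2**b) without overflowing
      # for large inputs; deg2rad and rad2deg are inverses of each other.
      a, b = 3.0, 4.0
      print(np.logaddexp2(a, b), np.log2(2**a + 2**b))   # both ~4.585
      print(np.rad2deg(np.deg2rad(180.0)))               # 180.0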
........
  r6117 | cdavid | 2008-11-29 01:47:34 +0900 (Sat, 29 Nov 2008) | 1 line
  
  Fix typo in core scons script.
........
  r6118 | cdavid | 2008-11-29 01:50:08 +0900 (Sat, 29 Nov 2008) | 1 line
  
  Another typo in the core scons script.
........
  r6119 | stefan | 2008-11-29 21:07:07 +0900 (Sat, 29 Nov 2008) | 1 line
  
  Add memory map support to `load` [patch by Gael Varoquaux].  Closes #954.
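
  A minimal sketch of the new mmap_mode argument (the file name is
  illustrative):

      import numpy as np

      np.save('example.npy', np.arange(1000))
      # With mmap_mode set, the data stays on disk and is paged in lazily;
      # the result is a read-only numpy.memmap instead of an in-memory array.
      a = np.load('example.npy', mmap_mode='r')
      print(a[10:13])    # only the accessed portion is actually read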
........
  r6120 | stefan | 2008-11-29 21:07:54 +0900 (Sat, 29 Nov 2008) | 1 line
  
  Add test for load's mmap_mode.
........
  r6121 | stefan | 2008-11-29 21:08:29 +0900 (Sat, 29 Nov 2008) | 1 line
  
  Opening a memmap requires a filename.  Raise an error otherwise.
........
  r6122 | stefan | 2008-11-29 21:09:07 +0900 (Sat, 29 Nov 2008) | 1 line
  
  Reformat spacing in io tests.
........
  r6123 | stefan | 2008-11-29 23:53:44 +0900 (Sat, 29 Nov 2008) | 1 line
  
  Identify file object using 'readline', rather than 'seek'.
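
  The change is a duck-typing tweak; a hypothetical helper showing the
  idea (not the actual numpy.lib.io code):

      def _open_if_needed(fname):
          # Anything exposing readline is treated as an already-open file
          # object; everything else is assumed to be a path and opened here.
          if hasattr(fname, 'readline'):
              return fname
          return open(fname, 'r')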
........
  r6124 | stefan | 2008-11-29 23:54:29 +0900 (Sat, 29 Nov 2008) | 1 line
  
  Add bz2 support to loadtxt [patch by Ryan May].
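
  A short sketch of the new bz2 support (file name and data are made up):

      import bz2
      import numpy as np

      # loadtxt now recognizes the .bz2 suffix and decompresses
      # transparently, as it already did for .gz files.
      with bz2.BZ2File('data.txt.bz2', 'w') as f:
          f.write(b'1 2 3\n4 5 6\n')
      print(np.loadtxt('data.txt.bz2').shape)    # (2, 3)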
........
  r6125 | ptvirtan | 2008-11-30 23:44:38 +0900 (Sun, 30 Nov 2008) | 1 line
  
  Move Sphinx extensions under Numpy's SVN trunk
........
  r6126 | ptvirtan | 2008-12-01 00:08:38 +0900 (Mon, 01 Dec 2008) | 1 line
  
  Rename core/.../docstrings.py to ufunc_docstrings.py
........
  r6127 | pierregm | 2008-12-01 18:45:51 +0900 (Mon, 01 Dec 2008) | 1 line
  
  Fixed make_mask_descr for nested dtypes
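
  To illustrate the fix (field names below are invented): make_mask_descr
  maps every field of a possibly nested dtype to a boolean field of the
  same structure.

      import numpy as np

      dt = np.dtype([('pos', [('x', float), ('y', float)]), ('id', int)])
      # Every field, including the nested ones, becomes a bool mask field.
      print(np.ma.make_mask_descr(dt))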
........
  r6129 | pierregm | 2008-12-02 02:56:58 +0900 (Tue, 02 Dec 2008) | 2 lines
  
  * added flatten_mask to collapse masks w/ (nested) flexible types.
  * fixed __getitem__ on arrays w/ nested dtype
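
  A rough sketch of flatten_mask (dtype and values are illustrative): it
  collapses a structured mask into a flat boolean array, one entry per
  innermost field.

      import numpy as np

      m = np.array([(False, (True, False)), (True, (False, False))],
                   dtype=[('a', bool), ('b', [('ba', bool), ('bb', bool)])])
      print(np.ma.flatten_mask(m))   # [False  True False  True False False]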
........
  r6130 | pierregm | 2008-12-02 11:40:22 +0900 (Tue, 02 Dec 2008) | 3 lines
  
  * Fixed MaskedArray for nested dtype w/ input mask
  * Fixed masked_all for nested dtype
  * Fixed masked_all_like for nested dtype
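
  For example (shape and field names are made up), masked_all with a
  nested dtype now starts every field of every element out masked:

      import numpy as np

      dt = np.dtype([('a', float), ('b', [('x', int), ('y', int)])])
      m = np.ma.masked_all((2,), dtype=dt)
      print(m.mask['b']['x'])    # [ True  True]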
........
  r6131 | pierregm | 2008-12-02 17:50:11 +0900 (Tue, 02 Dec 2008) | 1 line
  
  * Fixed make_mask_descr for dtype w/ composite names, like [(('A','B'), float)]
........
  r6132 | pierregm | 2008-12-03 03:42:12 +0900 (Wed, 03 Dec 2008) | 1 line
  
  * Cleaned up make_mask_descr 
........
  r6133 | ptvirtan | 2008-12-04 06:52:36 +0900 (Thu, 04 Dec 2008) | 1 line
  
  Refactor plot:: directive somewhat
........
  r6134 | ptvirtan | 2008-12-04 07:15:51 +0900 (Thu, 04 Dec 2008) | 1 line
  
  sphinxext: fix a small bug in plot directive
........
  r6136 | cdavid | 2008-12-04 12:21:51 +0900 (Thu, 04 Dec 2008) | 1 line
  
  Add /arch option to superpack installer to override detected arch.
........
  r6137 | ptvirtan | 2008-12-05 08:06:29 +0900 (Fri, 05 Dec 2008) | 1 line
  
  sphinxext: support autosummary:: directives in automodule docstrings
........
  r6138 | pierregm | 2008-12-06 05:40:44 +0900 (Sat, 06 Dec 2008) | 2 lines
  
  * Added MaskError
  * If a bool or int ndarray is given as the explicit output of var/min/max, an exception is raised when the function would have had to output np.nan
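
  A sketch of the new behavior, assuming a fully masked input so the
  result would have to be np.nan (names and values are illustrative):

      import numpy as np

      x = np.ma.array([1.0, 2.0, 3.0], mask=[True, True, True])
      out = np.empty((), dtype=int)   # an int array cannot hold a masked result
      try:
          x.var(out=out)
      except np.ma.MaskError:
          print('MaskError: masked result cannot be stored in an int output')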
........
  r6140 | ptvirtan | 2008-12-14 01:18:04 +0900 (Sun, 14 Dec 2008) | 1 line
  
  Get lstsq and eigvals from numpy.linalg, not from numpy.dual. Addresses Scipy ticket #800
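
  The change only affects where these are imported from; direct usage is
  unchanged (the data below is arbitrary):

      import numpy as np
      from numpy.linalg import lstsq, eigvals

      A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0, 3.0])
      print(lstsq(A, b)[0])        # least-squares coefficients, ~[0. 1.]
      print(eigvals(np.eye(2)))    # [1. 1.]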
........
  r6141 | ptvirtan | 2008-12-14 06:02:05 +0900 (Sun, 14 Dec 2008) | 12 lines
  
  docs: fix minor issues, support htmlhelp.
  
  - Don't use :members: in automodule; it generates too much
    and not very useful output
  
  - Fix edit links and summarize.py
  
  - Add better htmlhelp build target
  
  - Add upload target
  
  - Fix permissions on make dist
........
  r6142 | jarrod.millman | 2008-12-14 19:32:51 +0900 (Sun, 14 Dec 2008) | 2 lines
  
  wordsmithing
........
  r6143 | jarrod.millman | 2008-12-16 20:21:52 +0900 (Tue, 16 Dec 2008) | 2 lines
  
  added missing THANKS for Alan's testing work this summer
........
  r6144 | cdavid | 2008-12-17 03:04:24 +0900 (Wed, 17 Dec 2008) | 1 line
  
  BUG: Do not hardcode the Fortran runtime when copying it on Windows. Should fix #969.
........
  r6145 | cdavid | 2008-12-17 03:26:13 +0900 (Wed, 17 Dec 2008) | 1 line
  
  Add a function to get configured target for gfortran.
........
  r6146 | cdavid | 2008-12-17 03:32:41 +0900 (Wed, 17 Dec 2008) | 1 line
  
  Fix get_target.
........
  r6147 | cdavid | 2008-12-17 03:41:32 +0900 (Wed, 17 Dec 2008) | 1 line
  
  Add target specific lib dir for gfortran on windows when msvc is the C compiler.
........
  r6148 | cdavid | 2008-12-17 03:48:37 +0900 (Wed, 17 Dec 2008) | 1 line
  
  Fix overriding of library_dirs.
........
  r6149 | cdavid | 2008-12-17 03:53:25 +0900 (Wed, 17 Dec 2008) | 1 line
  
  Add mingw32 and mingwex libraries as runtime libraries for extensions which use fortran and are built with gfortran+MS compiler.
........
  r6174 | ptvirtan | 2008-12-20 02:58:57 +0900 (Sat, 20 Dec 2008) | 1 line
  
  docs: put CHM files in a zip
........
  r6175 | ptvirtan | 2008-12-20 22:40:30 +0900 (Sat, 20 Dec 2008) | 1 line
  
  test_umath: don't check against cmath on branch cuts, since the behavior of our functions varies across platforms on them
........
  r6179 | cdavid | 2008-12-21 15:02:29 +0900 (Sun, 21 Dec 2008) | 1 line
  
  Do not declare missing functions to avoid mismatch with potentially conflicting, undetected ones
........
  r6180 | cdavid | 2008-12-21 15:02:44 +0900 (Sun, 21 Dec 2008) | 1 line
  
  Update comments in umath.
........
  r6181 | cdavid | 2008-12-21 15:03:05 +0900 (Sun, 21 Dec 2008) | 1 line
  
  Do not set function to macro in umath anymore.
........
  r6182 | cdavid | 2008-12-21 15:03:19 +0900 (Sun, 21 Dec 2008) | 1 line
  
  Do not define math functions as static: better to have a link error when we have a config problem than to have two functions with the same name.
........
  r6185 | cdavid | 2008-12-22 01:19:14 +0900 (Mon, 22 Dec 2008) | 1 line
  
  Add doc sources so that sdist tarball contains them.
........
  r6186 | pierregm | 2008-12-22 19:01:51 +0900 (Mon, 22 Dec 2008) | 4 lines
  
  testutils:
  assert_array_compare: make sure the comparison is performed on ndarrays, and that the np version of the comparison function is used.
  core:
  * Try not to touch the data in unary/binary ufuncs (including in-place operations)
........
  r6187 | pearu | 2008-12-22 19:05:00 +0900 (Mon, 22 Dec 2008) | 1 line
  
  Fix a bug.
........



Property changes on: branches/dynamic_cpu_configuration
___________________________________________________________________
Name: svnmerge-integrated
   - /branches/distutils-revamp:1-2752 /branches/multicore:1-3687 /branches/visualstudio_manifest:1-6077 /trunk:1-6100
   + /branches/distutils-revamp:1-2752 /branches/multicore:1-3687 /branches/visualstudio_manifest:1-6077 /trunk:1-6187

Modified: branches/dynamic_cpu_configuration/THANKS.txt
===================================================================
--- branches/dynamic_cpu_configuration/THANKS.txt	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/THANKS.txt	2008-12-22 13:23:03 UTC (rev 6188)
@@ -35,15 +35,17 @@
     Valgrind expertise.
 David Cournapeau for build support, doc-and-bug fixes, and code
     contributions including fast_clipping.
-Jarrod Millman for release management, community coordination,
-    and cheerleading.
+Jarrod Millman for release management, community coordination, and code
+    clean up.
 Chris Burns for work on memory mapped arrays and bug-fixes.
 Pauli Virtanen for documentation, bug-fixes, lookfor and the
-     documentation editor.
+    documentation editor.
 A.M. Archibald for no-copy-reshape code, strided array tricks,
-     documentation and bug-fixes.
+    documentation and bug-fixes.
 Pierre Gerard-Marchant for rewriting masked array functionality.
 Roberto de Almeida for the buffered array iterator.
+Alan McIntyre for updating the NumPy test framework to use nose, improve
+    the test coverage, and enhancing the test system documentation 
 
 NumPy is based on the Numeric (Jim Hugunin, Paul Dubois, Konrad
 Hinsen, and David Ascher) and NumArray (Perry Greenfield, J Todd

Modified: branches/dynamic_cpu_configuration/doc/Makefile
===================================================================
--- branches/dynamic_cpu_configuration/doc/Makefile	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/doc/Makefile	2008-12-22 13:23:03 UTC (rev 6188)
@@ -22,30 +22,38 @@
 	@echo "  latex     to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
 	@echo "  changes   to make an overview over all changed/added/deprecated items"
 	@echo "  linkcheck to check all external links for integrity"
+	@echo "  upload USER=...  to upload results to docs.scipy.org"
 
 clean:
 	-rm -rf build/* source/reference/generated
 
+upload:
+	@test -e build/dist || { echo "make dist is required first"; exit 1; }
+	@test output-is-fine -nt build/dist || { \
+		echo "Review the output in build/dist, and do 'touch output-is-fine' before uploading."; exit 1; }
+	rsync -r -z --delete-after -p build/dist/ $(USER)@docs.scipy.org:/home/docserver/www-root/doc/numpy/
+
 dist: html
 	test -d build/latex || make latex
 	make -C build/latex all-pdf
+	-test -d build/htmlhelp || make htmlhelp-build
 	-rm -rf build/dist
 	cp -r build/html build/dist
 	perl -pi -e 's#^\s*(<li><a href=".*?">NumPy.*?Manual.*?»</li>)#<li><a href="/">Numpy and Scipy Documentation</a> »</li>#;' build/dist/*.html build/dist/*/*.html build/dist/*/*/*.html
 	cd build/html && zip -9r ../dist/numpy-html.zip .
-	cp build/latex/*.pdf build/dist
+	cp build/latex/numpy-*.pdf build/dist
+	-zip build/dist/numpy-chm.zip build/htmlhelp/numpy.chm
 	cd build/dist && tar czf ../dist.tar.gz *
+	chmod ug=rwX,o=rX -R build/dist
+	find build/dist -type d -print0 | xargs -0r chmod g+s
 
 generate: build/generate-stamp
-build/generate-stamp: $(wildcard source/reference/*.rst) ext
+build/generate-stamp: $(wildcard source/reference/*.rst)
 	mkdir -p build
-	./ext/autosummary_generate.py source/reference/*.rst \
+	./sphinxext/autosummary_generate.py source/reference/*.rst \
 		-p dump.xml -o source/reference/generated 
 	touch build/generate-stamp
 
-ext:
-	svn co http://sphinx.googlecode.com/svn/contrib/trunk/numpyext ext
-
 html: generate
 	mkdir -p build/html build/doctrees
 	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) build/html
@@ -70,10 +78,15 @@
 	@echo "Build finished; now you can run HTML Help Workshop with the" \
 	      ".hhp project file in build/htmlhelp."
 
+htmlhelp-build: htmlhelp build/htmlhelp/numpy.chm
+%.chm: %.hhp
+	-hhc.exe $^
+
 latex: generate
 	mkdir -p build/latex build/doctrees
 	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) build/latex
 	python postprocess.py tex build/latex/*.tex
+	perl -pi -e 's/\t(latex.*|pdflatex) (.*)/\t-$$1 -interaction batchmode $$2/' build/latex/Makefile
 	@echo
 	@echo "Build finished; the LaTeX files are in build/latex."
 	@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \

Copied: branches/dynamic_cpu_configuration/doc/release/1.3.0-notes.rst (from rev 6149, trunk/doc/release/1.3.0-notes.rst)

Modified: branches/dynamic_cpu_configuration/doc/source/_templates/indexcontent.html
===================================================================
--- branches/dynamic_cpu_configuration/doc/source/_templates/indexcontent.html	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/doc/source/_templates/indexcontent.html	2008-12-22 13:23:03 UTC (rev 6188)
@@ -33,6 +33,7 @@
       <p class="biglink"><a class="biglink" href="{{ pathto("bugs") }}">Reporting bugs</a></p>
       <p class="biglink"><a class="biglink" href="{{ pathto("about") }}">About NumPy</a></p>
     </td><td width="50%">
+      <p class="biglink"><a class="biglink" href="{{ pathto("release") }}">Release Notes</a></p>
       <p class="biglink"><a class="biglink" href="{{ pathto("license") }}">License of Numpy</a></p>
     </td></tr>
   </table>

Modified: branches/dynamic_cpu_configuration/doc/source/conf.py
===================================================================
--- branches/dynamic_cpu_configuration/doc/source/conf.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/doc/source/conf.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -5,7 +5,7 @@
 # If your extensions are in another directory, add it here. If the directory
 # is relative to the documentation root, use os.path.abspath to make it
 # absolute, like shown here.
-sys.path.append(os.path.abspath('../ext'))
+sys.path.append(os.path.abspath('../sphinxext'))
 
 # Check Sphinx version
 import sphinx
@@ -21,7 +21,7 @@
 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
 extensions = ['sphinx.ext.autodoc', 'sphinx.ext.pngmath', 'numpydoc',
               'phantom_import', 'autosummary', 'sphinx.ext.intersphinx',
-              'sphinx.ext.coverage']
+              'sphinx.ext.coverage', 'only_directives']
 
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ['_templates']
@@ -131,7 +131,7 @@
 #html_file_suffix = '.html'
 
 # Output file base name for HTML help builder.
-htmlhelp_basename = 'NumPydoc'
+htmlhelp_basename = 'numpy'
 
 # Pngmath should try to align formulas properly
 pngmath_use_preview = True
@@ -208,7 +208,7 @@
 phantom_import_file = 'dump.xml'
 
 # Edit links
-#numpydoc_edit_link = '`Edit </pydocweb/doc/%(full_name)s/>`__'
+numpydoc_edit_link = '`Edit </numpy/docs/%(full_name)s/>`__'
 
 # -----------------------------------------------------------------------------
 # Coverage checker

Modified: branches/dynamic_cpu_configuration/doc/source/contents.rst
===================================================================
--- branches/dynamic_cpu_configuration/doc/source/contents.rst	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/doc/source/contents.rst	2008-12-22 13:23:03 UTC (rev 6188)
@@ -6,6 +6,7 @@
    
    user/index
    reference/index
+   release
    about
    bugs
    license

Modified: branches/dynamic_cpu_configuration/doc/source/reference/routines.emath.rst
===================================================================
--- branches/dynamic_cpu_configuration/doc/source/reference/routines.emath.rst	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/doc/source/reference/routines.emath.rst	2008-12-22 13:23:03 UTC (rev 6188)
@@ -7,4 +7,4 @@
           available after :mod:`numpy` is imported.
 
 .. automodule:: numpy.lib.scimath
-   :members:
+

Modified: branches/dynamic_cpu_configuration/doc/source/reference/routines.ma.rst
===================================================================
--- branches/dynamic_cpu_configuration/doc/source/reference/routines.ma.rst	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/doc/source/reference/routines.ma.rst	2008-12-22 13:23:03 UTC (rev 6188)
@@ -5,48 +5,403 @@
 
 .. currentmodule:: numpy
 
+
+Constants
+=========
+
+.. autosummary::
+   :toctree: generated/
+   
+   ma.masked
+   ma.nomask
+   
+   ma.MaskType
+
+
 Creation
---------
+========
 
+From existing data
+~~~~~~~~~~~~~~~~~~
+
 .. autosummary::
    :toctree: generated/
 
    ma.masked_array
+   ma.array
+   ma.copy
+   ma.frombuffer
+   ma.fromfunction
 
-Converting to ndarray
----------------------
+   ma.MaskedArray.copy
 
+
+Ones and zeros
+~~~~~~~~~~~~~~
+
 .. autosummary::
    :toctree: generated/
+   
+   ma.empty
+   ma.empty_like
+   ma.masked_all
+   ma.masked_all_like
+   ma.ones
+   ma.zeros
 
-   ma.filled
-   ma.common_fill_value
-   ma.default_fill_value
-   ma.masked_array.get_fill_value
-   ma.maximum_fill_value
-   ma.minimum_fill_value
 
+_____
+
 Inspecting the array
---------------------
+====================
 
 .. autosummary::
    :toctree: generated/
 
+   ma.all
+   ma.any
+   ma.count
+   ma.count_masked
    ma.getmask
    ma.getmaskarray
    ma.getdata
-   ma.count_masked
+   ma.nonzero
+   ma.shape
+   ma.size
+   
+   ma.MaskedArray.data
+   ma.MaskedArray.mask
+   ma.MaskedArray.recordmask
+   
+   ma.MaskedArray.all
+   ma.MaskedArray.any
+   ma.MaskedArray.count
+   ma.MaskedArray.nonzero
+   ma.shape
+   ma.size
 
-Modifying the mask
-------------------
 
+_____
+
+Manipulating a MaskedArray
+==========================
+
+Changing the shape
+~~~~~~~~~~~~~~~~~~
+
 .. autosummary::
    :toctree: generated/
+   
+   ma.ravel
+   ma.reshape
+   ma.resize
 
+   ma.MaskedArray.flatten
+   ma.MaskedArray.ravel
+   ma.MaskedArray.reshape
+   ma.MaskedArray.resize
+
+
+Modifying axes
+~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+   
+   ma.swapaxes
+   ma.transpose
+   
+   ma.MaskedArray.swapaxes
+   ma.MaskedArray.transpose
+
+
+Changing the number of dimensions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+   
+   ma.atleast_1d
+   ma.atleast_2d
+   ma.atleast_3d
+   ma.expand_dims
+   ma.squeeze
+
+   ma.MaskedArray.squeeze
+   
+   ma.column_stack
+   ma.concatenate
+   ma.dstack
+   ma.hstack
+   ma.hsplit
+   ma.mr_
+   ma.row_stack
+   ma.vstack
+
+
+Joining arrays
+~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+
+   ma.column_stack 
+   ma.concatenate 
+   ma.dstack 
+   ma.hstack 
+   ma.vstack
+
+
+_____
+
+Operations on masks
+===================
+
+Creating a mask
+~~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+
    ma.make_mask
+   ma.make_mask_none
+   ma.mask_or
+   ma.make_mask_descr
+
+
+Accessing a mask
+~~~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+
+   ma.getmask
+   ma.getmaskarray
+   ma.masked_array.mask
+
+
+Finding masked data
+~~~~~~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+
+   ma.flatnotmasked_contiguous
+   ma.flatnotmasked_edges
+   ma.notmasked_contiguous
+   ma.notmasked_edges
+
+
+Modifying a mask
+~~~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+
    ma.mask_cols
    ma.mask_or
    ma.mask_rowcols
    ma.mask_rows
    ma.harden_mask
-   ma.ids
+   ma.soften_mask
+   
+   ma.MaskedArray.harden_mask
+   ma.MaskedArray.soften_mask
+   ma.MaskedArray.shrink_mask
+   ma.MaskedArray.unshare_mask
+
+
+_____
+
+Conversion operations
+======================
+
+> to a masked array
+~~~~~~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+
+   ma.asarray
+   ma.asanyarray
+   ma.fix_invalid
+   ma.masked_equal
+   ma.masked_greater
+   ma.masked_greater_equal
+   ma.masked_inside
+   ma.masked_invalid
+   ma.masked_less
+   ma.masked_less_equal
+   ma.masked_not_equal
+   ma.masked_object
+   ma.masked_outside
+   ma.masked_values
+   ma.masked_where
+
+
+> to a ndarray
+~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+
+   ma.compress_cols
+   ma.compress_rowcols
+   ma.compress_rows
+   ma.compressed
+   ma.filled
+   
+   ma.MaskedArray.compressed
+   ma.MaskedArray.filled
+
+
+> to another object
+~~~~~~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+   
+   ma.MaskedArray.tofile
+   ma.MaskedArray.tolist
+   ma.MaskedArray.torecords
+   ma.MaskedArray.tostring
+
+
+Pickling and unpickling
+~~~~~~~~~~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+   
+   ma.dump
+   ma.dumps
+   ma.load
+   ma.loads
+
+
+Filling a masked array
+~~~~~~~~~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+
+   ma.common_fill_value
+   ma.default_fill_value
+   ma.maximum_fill_value
+   ma.maximum_fill_value
+   ma.set_fill_value
+   
+   ma.MaskedArray.get_fill_value
+   ma.MaskedArray.set_fill_value
+   ma.MaskedArray.fill_value
+
+
+_____
+
+Masked arrays arithmetics
+=========================
+
+Arithmetics
+~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+   
+   ma.anom
+   ma.anomalies
+   ma.average
+   ma.conjugate
+   ma.corrcoef
+   ma.cov
+   ma.cumsum
+   ma.cumprod
+   ma.mean
+   ma.median
+   ma.power
+   ma.prod
+   ma.std
+   ma.sum
+   ma.var
+   
+   ma.MaskedArray.anom
+   ma.MaskedArray.cumprod
+   ma.MaskedArray.cumsum
+   ma.MaskedArray.mean
+   ma.MaskedArray.prod
+   ma.MaskedArray.std
+   ma.MaskedArray.sum
+   ma.MaskedArray.var
+
+
+Minimum/maximum
+~~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+
+   ma.argmax
+   ma.argmin
+   ma.max
+   ma.min
+   ma.ptp
+
+   ma.MaskedArray.argmax
+   ma.MaskedArray.argmin
+   ma.MaskedArray.max
+   ma.MaskedArray.min
+   ma.MaskedArray.ptp
+
+
+Sorting
+~~~~~~~
+.. autosummary::
+   :toctree: generated/
+
+   ma.argsort
+   ma.sort
+   ma.MaskedArray.argsort
+   ma.MaskedArray.sort
+
+
+Algebra
+~~~~~~~
+.. autosummary::
+   :toctree: generated/
+
+   ma.diag
+   ma.dot
+   ma.identity
+   ma.inner
+   ma.innerproduct
+   ma.outer
+   ma.outerproduct
+   ma.trace
+   ma.transpose
+
+   ma.MaskedArray.trace
+   ma.MaskedArray.transpose
+
+
+Polynomial fit
+~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+   
+   ma.vander
+   ma.polyfit
+
+
+Clipping and rounding
+~~~~~~~~~~~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+
+   ma.around
+   ma.clip
+   ma.round
+
+   ma.MaskedArray.clip
+   ma.MaskedArray.round
+
+
+Miscellanea
+~~~~~~~~~~~
+.. autosummary::
+   :toctree: generated/
+
+   ma.allequal
+   ma.allclose
+   ma.apply_along_axis
+   ma.arange
+   ma.choose
+   ma.ediff1d
+   ma.indices
+   ma.where
+
+

Modified: branches/dynamic_cpu_configuration/doc/source/reference/routines.matlib.rst
===================================================================
--- branches/dynamic_cpu_configuration/doc/source/reference/routines.matlib.rst	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/doc/source/reference/routines.matlib.rst	2008-12-22 13:23:03 UTC (rev 6188)
@@ -8,4 +8,4 @@
 <matrix>` instead of :class:`ndarrays <ndarray>`.
 
 .. automodule:: numpy.matlib
-   :members:
+

Modified: branches/dynamic_cpu_configuration/doc/source/reference/routines.numarray.rst
===================================================================
--- branches/dynamic_cpu_configuration/doc/source/reference/routines.numarray.rst	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/doc/source/reference/routines.numarray.rst	2008-12-22 13:23:03 UTC (rev 6188)
@@ -3,4 +3,4 @@
 **********************************************
 
 .. automodule:: numpy.numarray
-   :members:
+

Modified: branches/dynamic_cpu_configuration/doc/source/reference/routines.oldnumeric.rst
===================================================================
--- branches/dynamic_cpu_configuration/doc/source/reference/routines.oldnumeric.rst	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/doc/source/reference/routines.oldnumeric.rst	2008-12-22 13:23:03 UTC (rev 6188)
@@ -5,4 +5,4 @@
 .. currentmodule:: numpy
 
 .. automodule:: numpy.oldnumeric
-   :members:
+

Modified: branches/dynamic_cpu_configuration/doc/source/reference/ufuncs.rst
===================================================================
--- branches/dynamic_cpu_configuration/doc/source/reference/ufuncs.rst	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/doc/source/reference/ufuncs.rst	2008-12-22 13:23:03 UTC (rev 6188)
@@ -77,20 +77,20 @@
    with a dimension of length 1 to satisfy property 2.
 
 .. admonition:: Example
- 
+
    If ``a.shape`` is (5,1), ``b.shape`` is (1,6), ``c.shape`` is (6,)
    and d.shape is ``()`` so that d is a scalar, then *a*, *b*, *c*,
    and *d* are all broadcastable to dimension (5,6); and
-   
+
    - *a* acts like a (5,6) array where ``a[:,0]`` is broadcast to the other
      columns,
-   
+
    - *b* acts like a (5,6) array where ``b[0,:]`` is broadcast
      to the other rows,
-        
+
    - *c* acts like a (1,6) array and therefore like a (5,6) array
      where ``c[:]` is broadcast to every row, and finally,
-   
+
    - *d* acts like a (5,6) array where the single value is repeated.
 
 
@@ -205,8 +205,8 @@
 
 .. admonition:: Figure
 
-    Code segment showing the can cast safely table for a 32-bit system. 
-    
+    Code segment showing the can cast safely table for a 32-bit system.
+
     >>> def print_table(ntypes):
     ...     print 'X',
     ...     for char in ntypes: print char,
@@ -245,7 +245,7 @@
 You should note that, while included in the table for completeness,
 the 'S', 'U', and 'V' types cannot be operated on by ufuncs. Also,
 note that on a 64-bit system the integer types may have different
-sizes resulting in a slightly altered table. 
+sizes resulting in a slightly altered table.
 
 Mixed scalar-array operations use a different set of casting rules
 that ensure that a scalar cannot upcast an array unless the scalar is
@@ -264,7 +264,7 @@
 --------------------------
 
 All ufuncs take optional keyword arguments. These represent rather
-advanced usage and will likely not be used by most users. 
+advanced usage and will likely not be used by most users.
 
 .. index::
    pair: ufunc; keyword arguments
@@ -296,7 +296,7 @@
 ----------
 
 There are some informational attributes that universal functions
-possess. None of the attributes can be set. 
+possess. None of the attributes can be set.
 
 .. index::
    pair: ufunc; attributes
@@ -316,7 +316,7 @@
 
    ufunc.nin
    ufunc.nout
-   ufunc.nargs 
+   ufunc.nargs
    ufunc.ntypes
    ufunc.types
    ufunc.identity
@@ -386,7 +386,7 @@
 .. note::
 
     The ufunc still returns its output(s) even if you use the optional
-    output argument(s). 
+    output argument(s).
 
 Math operations
 ---------------
@@ -398,6 +398,7 @@
     multiply
     divide
     logaddexp
+    logaddexp2
     true_divide
     floor_divide
     negative
@@ -410,10 +411,12 @@
     sign
     conj
     exp
+    exp2
     log
+    log2
+    log10
     expm1
     log1p
-    log10
     sqrt
     square
     reciprocal
@@ -433,7 +436,7 @@
 Trigonometric functions
 -----------------------
 All trigonometric functions use radians when an angle is called for.
-The ratio of degrees to radians is :math:`180^{\circ}/\pi.` 
+The ratio of degrees to radians is :math:`180^{\circ}/\pi.`
 
 .. autosummary::
 
@@ -458,7 +461,7 @@
 -----------------------
 
 These function all need integer arguments and they maniuplate the bit-
-pattern of those arguments. 
+pattern of those arguments.
 
 .. autosummary::
 
@@ -501,7 +504,7 @@
     element-by-element array comparisons. Be sure to understand the
     operator precedence: (a>2) & (a<5) is the proper syntax because a>2 &
     a<5 will result in an error due to the fact that 2 & a is evaluated
-    first. 
+    first.
 
 .. autosummary::
 
@@ -514,7 +517,7 @@
     method of the maximum ufunc is much faster. Also, the max() method
     will not give answers you might expect for arrays with greater than
     one dimension. The reduce method of minimum also allows you to compute
-    a total minimum over an array. 
+    a total minimum over an array.
 
 .. autosummary::
 
@@ -528,7 +531,7 @@
     two arrays is larger. In contrast, max(a,b) treats the objects a and b
     as a whole, looks at the (total) truth value of a>b and uses it to
     return either a or b (as a whole). A similar difference exists between
-    minimum(a,b) and min(a,b). 
+    minimum(a,b) and min(a,b).
 
 
 Floating functions
@@ -536,7 +539,7 @@
 
 Recall that all of these functions work element-by-element over an
 array, returning an array output. The description details only a
-single operation. 
+single operation.
 
 .. autosummary::
 

Copied: branches/dynamic_cpu_configuration/doc/source/release.rst (from rev 6149, trunk/doc/source/release.rst)

Copied: branches/dynamic_cpu_configuration/doc/sphinxext (from rev 6149, trunk/doc/sphinxext)

Deleted: branches/dynamic_cpu_configuration/doc/sphinxext/LICENSE.txt
===================================================================
--- trunk/doc/sphinxext/LICENSE.txt	2008-12-16 18:53:25 UTC (rev 6149)
+++ branches/dynamic_cpu_configuration/doc/sphinxext/LICENSE.txt	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,97 +0,0 @@
--------------------------------------------------------------------------------
-    The files
-    - numpydoc.py
-    - autosummary.py
-    - autosummary_generate.py
-    - docscrape.py
-    - docscrape_sphinx.py
-    - phantom_import.py
-    have the following license:
-
-Copyright (C) 2008 Stefan van der Walt <stefan at mentat.za.net>, Pauli Virtanen <pav at iki.fi>
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are
-met:
-
- 1. Redistributions of source code must retain the above copyright
-    notice, this list of conditions and the following disclaimer.
- 2. Redistributions in binary form must reproduce the above copyright
-    notice, this list of conditions and the following disclaimer in
-    the documentation and/or other materials provided with the
-    distribution.
-
-THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
-IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT,
-INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
-HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
-STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
-IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-POSSIBILITY OF SUCH DAMAGE.
-
--------------------------------------------------------------------------------
-    The files
-    - compiler_unparse.py
-    - comment_eater.py
-    - traitsdoc.py
-    have the following license:
-
-This software is OSI Certified Open Source Software.
-OSI Certified is a certification mark of the Open Source Initiative.
-
-Copyright (c) 2006, Enthought, Inc.
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are met:
-
- * Redistributions of source code must retain the above copyright notice, this
-   list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above copyright notice,
-   this list of conditions and the following disclaimer in the documentation
-   and/or other materials provided with the distribution.
- * Neither the name of Enthought, Inc. nor the names of its contributors may
-   be used to endorse or promote products derived from this software without
-   specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
-ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
-ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
--------------------------------------------------------------------------------
-    The files
-    - only_directives.py
-    - plot_directive.py
-    originate from Matplotlib (http://matplotlib.sf.net/) which has
-    the following license:
-
-Copyright (c) 2002-2008 John D. Hunter; All Rights Reserved.
-
-1. This LICENSE AGREEMENT is between John D. Hunter (“JDH”), and the Individual or Organization (“Licensee”) accessing and otherwise using matplotlib software in source or binary form and its associated documentation.
-
-2. Subject to the terms and conditions of this License Agreement, JDH hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use matplotlib 0.98.3 alone or in any derivative version, provided, however, that JDH’s License Agreement and JDH’s notice of copyright, i.e., “Copyright (c) 2002-2008 John D. Hunter; All Rights Reserved” are retained in matplotlib 0.98.3 alone or in any derivative version prepared by Licensee.
-
-3. In the event Licensee prepares a derivative work that is based on or incorporates matplotlib 0.98.3 or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to matplotlib 0.98.3.
-
-4. JDH is making matplotlib 0.98.3 available to Licensee on an “AS IS” basis. JDH MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, JDH MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF MATPLOTLIB 0.98.3 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
-
-5. JDH SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF MATPLOTLIB 0.98.3 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING MATPLOTLIB 0.98.3, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
-
-6. This License Agreement will automatically terminate upon a material breach of its terms and conditions.
-
-7. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between JDH and Licensee. This License Agreement does not grant permission to use JDH trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party.
-
-8. By copying, installing or otherwise using matplotlib 0.98.3, Licensee agrees to be bound by the terms and conditions of this License Agreement.
-

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/LICENSE.txt (from rev 6149, trunk/doc/sphinxext/LICENSE.txt)

Deleted: branches/dynamic_cpu_configuration/doc/sphinxext/__init__.py
===================================================================

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/__init__.py (from rev 6149, trunk/doc/sphinxext/__init__.py)

Deleted: branches/dynamic_cpu_configuration/doc/sphinxext/autosummary.py
===================================================================
--- trunk/doc/sphinxext/autosummary.py	2008-12-16 18:53:25 UTC (rev 6149)
+++ branches/dynamic_cpu_configuration/doc/sphinxext/autosummary.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,334 +0,0 @@
-"""
-===========
-autosummary
-===========
-
-Sphinx extension that adds an autosummary:: directive, which can be
-used to generate function/method/attribute/etc. summary lists, similar
-to those output eg. by Epydoc and other API doc generation tools.
-
-An :autolink: role is also provided.
-
-autosummary directive
----------------------
-
-The autosummary directive has the form::
-
-    .. autosummary::
-       :nosignatures:
-       :toctree: generated/
-       
-       module.function_1
-       module.function_2
-       ...
-
-and it generates an output table (containing signatures, optionally)
-
-    ========================  =============================================
-    module.function_1(args)   Summary line from the docstring of function_1
-    module.function_2(args)   Summary line from the docstring
-    ...
-    ========================  =============================================
-
-If the :toctree: option is specified, files matching the function names
-are inserted to the toctree with the given prefix:
-
-    generated/module.function_1
-    generated/module.function_2
-    ...
-
-Note: The file names contain the module:: or currentmodule:: prefixes.
-
-.. seealso:: autosummary_generate.py
-
-
-autolink role
--------------
-
-The autolink role functions as ``:obj:`` when the name referred can be
-resolved to a Python object, and otherwise it becomes simple emphasis.
-This can be used as the default role to make links 'smart'.
-
-"""
-import sys, os, posixpath, re
-
-from docutils.parsers.rst import directives
-from docutils.statemachine import ViewList
-from docutils import nodes
-
-import sphinx.addnodes, sphinx.roles, sphinx.builder
-from sphinx.util import patfilter
-
-from docscrape_sphinx import get_doc_object
-
-
-def setup(app):
-    app.add_directive('autosummary', autosummary_directive, True, (0, 0, False),
-                      toctree=directives.unchanged,
-                      nosignatures=directives.flag)
-    app.add_role('autolink', autolink_role)
-    
-    app.add_node(autosummary_toc,
-                 html=(autosummary_toc_visit_html, autosummary_toc_depart_noop),
-                 latex=(autosummary_toc_visit_latex, autosummary_toc_depart_noop))
-    app.connect('doctree-read', process_autosummary_toc)
-
-#------------------------------------------------------------------------------
-# autosummary_toc node
-#------------------------------------------------------------------------------
-
-class autosummary_toc(nodes.comment):
-    pass
-
-def process_autosummary_toc(app, doctree):
-    """
-    Insert items described in autosummary:: to the TOC tree, but do
-    not generate the toctree:: list.
-
-    """
-    env = app.builder.env
-    crawled = {}
-    def crawl_toc(node, depth=1):
-        crawled[node] = True
-        for j, subnode in enumerate(node):
-            try:
-                if (isinstance(subnode, autosummary_toc)
-                    and isinstance(subnode[0], sphinx.addnodes.toctree)):
-                    env.note_toctree(env.docname, subnode[0])
-                    continue
-            except IndexError:
-                continue
-            if not isinstance(subnode, nodes.section):
-                continue
-            if subnode not in crawled:
-                crawl_toc(subnode, depth+1)
-    crawl_toc(doctree)
-
-def autosummary_toc_visit_html(self, node):
-    """Hide autosummary toctree list in HTML output"""
-    raise nodes.SkipNode
-
-def autosummary_toc_visit_latex(self, node):
-    """Show autosummary toctree (= put the referenced pages here) in Latex"""
-    pass
-
-def autosummary_toc_depart_noop(self, node):
-    pass
-
-#------------------------------------------------------------------------------
-# .. autosummary::
-#------------------------------------------------------------------------------
-
-def autosummary_directive(dirname, arguments, options, content, lineno,
-                          content_offset, block_text, state, state_machine):
-    """
-    Pretty table containing short signatures and summaries of functions etc.
-
-    autosummary also generates a (hidden) toctree:: node.
-
-    """
-
-    names = []
-    names += [x.strip() for x in content if x.strip()]
-
-    table, warnings, real_names = get_autosummary(names, state,
-                                                  'nosignatures' in options)
-    node = table
-
-    env = state.document.settings.env
-    suffix = env.config.source_suffix
-    all_docnames = env.found_docs.copy()
-    dirname = posixpath.dirname(env.docname)
-
-    if 'toctree' in options:
-        tree_prefix = options['toctree'].strip()
-        docnames = []
-        for name in names:
-            name = real_names.get(name, name)
-
-            docname = tree_prefix + name
-            if docname.endswith(suffix):
-                docname = docname[:-len(suffix)]
-            docname = posixpath.normpath(posixpath.join(dirname, docname))
-            if docname not in env.found_docs:
-                warnings.append(state.document.reporter.warning(
-                    'toctree references unknown document %r' % docname,
-                    line=lineno))
-            docnames.append(docname)
-
-        tocnode = sphinx.addnodes.toctree()
-        tocnode['includefiles'] = docnames
-        tocnode['maxdepth'] = -1
-        tocnode['glob'] = None
-
-        tocnode = autosummary_toc('', '', tocnode)
-        return warnings + [node] + [tocnode]
-    else:
-        return warnings + [node]
-
-def get_autosummary(names, state, no_signatures=False):
-    """
-    Generate a proper table node for autosummary:: directive.
-
-    Parameters
-    ----------
-    names : list of str
-        Names of Python objects to be imported and added to the table.
-    document : document
-        Docutils document object
-    
-    """
-    document = state.document
-    
-    real_names = {}
-    warnings = []
-
-    prefixes = ['']
-    prefixes.insert(0, document.settings.env.currmodule)
-
-    table = nodes.table('')
-    group = nodes.tgroup('', cols=2)
-    table.append(group)
-    group.append(nodes.colspec('', colwidth=30))
-    group.append(nodes.colspec('', colwidth=70))
-    body = nodes.tbody('')
-    group.append(body)
-
-    def append_row(*column_texts):
-        row = nodes.row('')
-        for text in column_texts:
-            node = nodes.paragraph('')
-            vl = ViewList()
-            vl.append(text, '<autosummary>')
-            state.nested_parse(vl, 0, node)
-            row.append(nodes.entry('', node))
-        body.append(row)
-
-    for name in names:
-        try:
-            obj, real_name = import_by_name(name, prefixes=prefixes)
-        except ImportError:
-            warnings.append(document.reporter.warning(
-                'failed to import %s' % name))
-            append_row(":obj:`%s`" % name, "")
-            continue
-
-        real_names[name] = real_name
-
-        doc = get_doc_object(obj)
-
-        if doc['Summary']:
-            title = " ".join(doc['Summary'])
-        else:
-            title = ""
-        
-        col1 = ":obj:`%s <%s>`" % (name, real_name)
-        if doc['Signature']:
-            sig = re.sub('^[a-zA-Z_0-9.-]*', '', doc['Signature'])
-            if '=' in sig:
-                # abbreviate optional arguments
-                sig = re.sub(r', ([a-zA-Z0-9_]+)=', r'[, \1=', sig, count=1)
-                sig = re.sub(r'\(([a-zA-Z0-9_]+)=', r'([\1=', sig, count=1)
-                sig = re.sub(r'=[^,)]+,', ',', sig)
-                sig = re.sub(r'=[^,)]+\)$', '])', sig)
-                # shorten long strings
-                sig = re.sub(r'(\[.{16,16}[^,)]*?),.*?\]\)', r'\1, ...])', sig)
-            else:
-                sig = re.sub(r'(\(.{16,16}[^,)]*?),.*?\)', r'\1, ...)', sig)
-            col1 += " " + sig
-        col2 = title
-        append_row(col1, col2)
-
-    return table, warnings, real_names
-
-def import_by_name(name, prefixes=[None]):
-    """
-    Import a Python object that has the given name, under one of the prefixes.
-
-    Parameters
-    ----------
-    name : str
-        Name of a Python object, eg. 'numpy.ndarray.view'
-    prefixes : list of (str or None), optional
-        Prefixes to prepend to the name (None implies no prefix).
-        The first prefixed name that results to successful import is used.
-
-    Returns
-    -------
-    obj
-        The imported object
-    name
-        Name of the imported object (useful if `prefixes` was used)
-    
-    """
-    for prefix in prefixes:
-        try:
-            if prefix:
-                prefixed_name = '.'.join([prefix, name])
-            else:
-                prefixed_name = name
-            return _import_by_name(prefixed_name), prefixed_name
-        except ImportError:
-            pass
-    raise ImportError
-
-def _import_by_name(name):
-    """Import a Python object given its full name"""
-    try:
-        # try first interpret `name` as MODNAME.OBJ
-        name_parts = name.split('.')
-        try:
-            modname = '.'.join(name_parts[:-1])
-            __import__(modname)
-            return getattr(sys.modules[modname], name_parts[-1])
-        except (ImportError, IndexError, AttributeError):
-            pass
-       
-        # ... then as MODNAME, MODNAME.OBJ1, MODNAME.OBJ1.OBJ2, ...
-        last_j = 0
-        modname = None
-        for j in reversed(range(1, len(name_parts)+1)):
-            last_j = j
-            modname = '.'.join(name_parts[:j])
-            try:
-                __import__(modname)
-            except ImportError:
-                continue
-            if modname in sys.modules:
-                break
-
-        if last_j < len(name_parts):
-            obj = sys.modules[modname]
-            for obj_name in name_parts[last_j:]:
-                obj = getattr(obj, obj_name)
-            return obj
-        else:
-            return sys.modules[modname]
-    except (ValueError, ImportError, AttributeError, KeyError), e:
-        raise ImportError(e)
-
-#------------------------------------------------------------------------------
-# :autolink: (smart default role)
-#------------------------------------------------------------------------------
-
-def autolink_role(typ, rawtext, etext, lineno, inliner,
-                  options={}, content=[]):
-    """
-    Smart linking role.
-
-    Expands to ":obj:`text`" if `text` is an object that can be imported;
-    otherwise expands to "*text*".
-    """
-    r = sphinx.roles.xfileref_role('obj', rawtext, etext, lineno, inliner,
-                                   options, content)
-    pnode = r[0][0]
-
-    prefixes = [None]
-    #prefixes.insert(0, inliner.document.settings.env.currmodule)
-    try:
-        obj, name = import_by_name(pnode['reftarget'], prefixes)
-    except ImportError:
-        content = pnode[0]
-        r[0][0] = nodes.emphasis(rawtext, content[0].astext(),
-                                 classes=content['classes'])
-    return r

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/autosummary.py (from rev 6149, trunk/doc/sphinxext/autosummary.py)

Deleted: branches/dynamic_cpu_configuration/doc/sphinxext/autosummary_generate.py
===================================================================
--- trunk/doc/sphinxext/autosummary_generate.py	2008-12-16 18:53:25 UTC (rev 6149)
+++ branches/dynamic_cpu_configuration/doc/sphinxext/autosummary_generate.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,219 +0,0 @@
-#!/usr/bin/env python
-r"""
-autosummary_generate.py OPTIONS FILES
-
-Generate automatic RST source files for items referred to in
-autosummary:: directives.
-
-Each generated RST file contains a single auto*:: directive which
-extracts the docstring of the referred item.
-
-Example Makefile rule::
-
-    generate:
-            ./ext/autosummary_generate.py -o source/generated source/*.rst
-
-"""
-import glob, re, inspect, os, optparse, pydoc
-from autosummary import import_by_name
-
-try:
-    from phantom_import import import_phantom_module
-except ImportError:
-    import_phantom_module = lambda x: x
-
-def main():
-    p = optparse.OptionParser(__doc__.strip())
-    p.add_option("-p", "--phantom", action="store", type="string",
-                 dest="phantom", default=None,
-                 help="Phantom import modules from a file")
-    p.add_option("-o", "--output-dir", action="store", type="string",
-                 dest="output_dir", default=None,
-                 help=("Write all output files to the given directory (instead "
-                       "of writing them as specified in the autosummary:: "
-                       "directives)"))
-    options, args = p.parse_args()
-
-    if len(args) == 0:
-        p.error("wrong number of arguments")
-
-    if options.phantom and os.path.isfile(options.phantom):
-        import_phantom_module(options.phantom)
-
-    # read
-    names = {}
-    for name, loc in get_documented(args).items():
-        for (filename, sec_title, keyword, toctree) in loc:
-            if toctree is not None:
-                path = os.path.join(os.path.dirname(filename), toctree)
-                names[name] = os.path.abspath(path)
-
-    # write
-    for name, path in sorted(names.items()):
-        if options.output_dir is not None:
-            path = options.output_dir
-        
-        if not os.path.isdir(path):
-            os.makedirs(path)
-
-        try:
-            obj, name = import_by_name(name)
-        except ImportError, e:
-            print "Failed to import '%s': %s" % (name, e)
-            continue
-
-        fn = os.path.join(path, '%s.rst' % name)
-
-        if os.path.exists(fn):
-            # skip
-            continue
-
-        f = open(fn, 'w')
-
-        try:
-            f.write('%s\n%s\n\n' % (name, '='*len(name)))
-
-            if inspect.isclass(obj):
-                if issubclass(obj, Exception):
-                    f.write(format_modulemember(name, 'autoexception'))
-                else:
-                    f.write(format_modulemember(name, 'autoclass'))
-            elif inspect.ismodule(obj):
-                f.write(format_modulemember(name, 'automodule'))
-            elif inspect.ismethod(obj) or inspect.ismethoddescriptor(obj):
-                f.write(format_classmember(name, 'automethod'))
-            elif callable(obj):
-                f.write(format_modulemember(name, 'autofunction'))
-            elif hasattr(obj, '__get__'):
-                f.write(format_classmember(name, 'autoattribute'))
-            else:
-                f.write(format_modulemember(name, 'autofunction'))
-        finally:
-            f.close()
-
-def format_modulemember(name, directive):
-    parts = name.split('.')
-    mod, name = '.'.join(parts[:-1]), parts[-1]
-    return ".. currentmodule:: %s\n\n.. %s:: %s\n" % (mod, directive, name)
-
-def format_classmember(name, directive):
-    parts = name.split('.')
-    mod, name = '.'.join(parts[:-2]), '.'.join(parts[-2:])
-    return ".. currentmodule:: %s\n\n.. %s:: %s\n" % (mod, directive, name)
-
-def get_documented(filenames):
-    """
-    Find out what items are documented in source/*.rst
-    See `get_documented_in_lines`.
-
-    """
-    documented = {}
-    for filename in filenames:
-        f = open(filename, 'r')
-        lines = f.read().splitlines()
-        documented.update(get_documented_in_lines(lines, filename=filename))
-        f.close()
-    return documented
-
-def get_documented_in_docstring(name, module=None, filename=None):
-    """
-    Find out what items are documented in the given object's docstring.
-    See `get_documented_in_lines`.
-    
-    """
-    try:
-        obj, real_name = import_by_name(name)
-        lines = pydoc.getdoc(obj).splitlines()
-        return get_documented_in_lines(lines, module=name, filename=filename)
-    except AttributeError:
-        pass
-    except ImportError, e:
-        print "Failed to import '%s': %s" % (name, e)
-    return {}
-
-def get_documented_in_lines(lines, module=None, filename=None):
-    """
-    Find out what items are documented in the given lines
-    
-    Returns
-    -------
-    documented : dict of list of (filename, title, keyword, toctree)
-        Dictionary whose keys are documented names of objects.
-        The value is a list of locations where the object was documented.
-        Each location is a tuple of filename, the current section title,
-        the name of the directive, and the value of the :toctree: argument
-        (if present) of the directive.
-
-    """
-    title_underline_re = re.compile("^[-=*_^#]{3,}\s*$")
-    autodoc_re = re.compile(".. auto(function|method|attribute|class|exception|module)::\s*([A-Za-z0-9_.]+)\s*$")
-    autosummary_re = re.compile(r'^\.\.\s+autosummary::\s*')
-    module_re = re.compile(r'^\.\.\s+(current)?module::\s*([a-zA-Z0-9_.]+)\s*$')
-    autosummary_item_re = re.compile(r'^\s+([_a-zA-Z][a-zA-Z0-9_.]*)\s*')
-    toctree_arg_re = re.compile(r'^\s+:toctree:\s*(.*?)\s*$')
-    
-    documented = {}
-   
-    current_title = []
-    last_line = None
-    toctree = None
-    current_module = module
-    in_autosummary = False
-    
-    for line in lines:
-        try:
-            if in_autosummary:
-                m = toctree_arg_re.match(line)
-                if m:
-                    toctree = m.group(1)
-                    continue
-
-                if line.strip().startswith(':'):
-                    continue # skip options
-
-                m = autosummary_item_re.match(line)
-                if m:
-                    name = m.group(1).strip()
-                    if current_module and not name.startswith(current_module + '.'):
-                        name = "%s.%s" % (current_module, name)
-                    documented.setdefault(name, []).append(
-                        (filename, current_title, 'autosummary', toctree))
-                    continue
-                if line.strip() == '':
-                    continue
-                in_autosummary = False
-
-            m = autosummary_re.match(line)
-            if m:
-                in_autosummary = True
-                continue
-
-            m = autodoc_re.search(line)
-            if m:
-                name = m.group(2).strip()
-                if m.group(1) == "module":
-                    current_module = name
-                    documented.update(get_documented_in_docstring(
-                        name, filename=filename))
-                elif current_module and not name.startswith(current_module+'.'):
-                    name = "%s.%s" % (current_module, name)
-                documented.setdefault(name, []).append(
-                    (filename, current_title, "auto" + m.group(1), None))
-                continue
-
-            m = title_underline_re.match(line)
-            if m and last_line:
-                current_title = last_line.strip()
-                continue
-
-            m = module_re.match(line)
-            if m:
-                current_module = m.group(2)
-                continue
-        finally:
-            last_line = line
-
-    return documented
-
-if __name__ == "__main__":
-    main()

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/autosummary_generate.py (from rev 6149, trunk/doc/sphinxext/autosummary_generate.py)
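
For orientation, the stub generator deleted above boils down to emitting one
".. currentmodule::" plus one "auto*" directive per documented name. A minimal
sketch of the reST it produces, reusing the format_modulemember helper as
defined in the file (the numpy.fft.fft name is only an illustrative target):

    # Sketch: what autosummary_generate.py writes into each generated stub file.
    def format_modulemember(name, directive):
        # Split "pkg.mod.obj" into the containing module and the bare object name.
        parts = name.split('.')
        mod, obj = '.'.join(parts[:-1]), parts[-1]
        return ".. currentmodule:: %s\n\n.. %s:: %s\n" % (mod, directive, obj)

    print(format_modulemember("numpy.fft.fft", "autofunction"))
    # .. currentmodule:: numpy.fft
    #
    # .. autofunction:: fft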

Deleted: branches/dynamic_cpu_configuration/doc/sphinxext/comment_eater.py
===================================================================
--- trunk/doc/sphinxext/comment_eater.py	2008-12-16 18:53:25 UTC (rev 6149)
+++ branches/dynamic_cpu_configuration/doc/sphinxext/comment_eater.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,158 +0,0 @@
-from cStringIO import StringIO
-import compiler
-import inspect
-import textwrap
-import tokenize
-
-from compiler_unparse import unparse
-
-
-class Comment(object):
-    """ A comment block.
-    """
-    is_comment = True
-    def __init__(self, start_lineno, end_lineno, text):
-        # int : The first line number in the block. 1-indexed.
-        self.start_lineno = start_lineno
-        # int : The last line number. Inclusive!
-        self.end_lineno = end_lineno
-        # str : The text block including '#' character but not any leading spaces.
-        self.text = text
-
-    def add(self, string, start, end, line):
-        """ Add a new comment line.
-        """
-        self.start_lineno = min(self.start_lineno, start[0])
-        self.end_lineno = max(self.end_lineno, end[0])
-        self.text += string
-
-    def __repr__(self):
-        return '%s(%r, %r, %r)' % (self.__class__.__name__, self.start_lineno,
-            self.end_lineno, self.text)
-
-
-class NonComment(object):
-    """ A non-comment block of code.
-    """
-    is_comment = False
-    def __init__(self, start_lineno, end_lineno):
-        self.start_lineno = start_lineno
-        self.end_lineno = end_lineno
-
-    def add(self, string, start, end, line):
-        """ Add lines to the block.
-        """
-        if string.strip():
-            # Only add if not entirely whitespace.
-            self.start_lineno = min(self.start_lineno, start[0])
-            self.end_lineno = max(self.end_lineno, end[0])
-
-    def __repr__(self):
-        return '%s(%r, %r)' % (self.__class__.__name__, self.start_lineno,
-            self.end_lineno)
-
-
-class CommentBlocker(object):
-    """ Pull out contiguous comment blocks.
-    """
-    def __init__(self):
-        # Start with a dummy.
-        self.current_block = NonComment(0, 0)
-
-        # All of the blocks seen so far.
-        self.blocks = []
-
-        # The index mapping lines of code to their associated comment blocks.
-        self.index = {}
-
-    def process_file(self, file):
-        """ Process a file object.
-        """
-        for token in tokenize.generate_tokens(file.next):
-            self.process_token(*token)
-        self.make_index()
-
-    def process_token(self, kind, string, start, end, line):
-        """ Process a single token.
-        """
-        if self.current_block.is_comment:
-            if kind == tokenize.COMMENT:
-                self.current_block.add(string, start, end, line)
-            else:
-                self.new_noncomment(start[0], end[0])
-        else:
-            if kind == tokenize.COMMENT:
-                self.new_comment(string, start, end, line)
-            else:
-                self.current_block.add(string, start, end, line)
-
-    def new_noncomment(self, start_lineno, end_lineno):
-        """ We are transitioning from a noncomment to a comment.
-        """
-        block = NonComment(start_lineno, end_lineno)
-        self.blocks.append(block)
-        self.current_block = block
-
-    def new_comment(self, string, start, end, line):
-        """ Possibly add a new comment.
-        
-        Only adds a new comment if this comment is the only thing on the line.
-        Otherwise, it extends the noncomment block.
-        """
-        prefix = line[:start[1]]
-        if prefix.strip():
-            # Oops! Trailing comment, not a comment block.
-            self.current_block.add(string, start, end, line)
-        else:
-            # A comment block.
-            block = Comment(start[0], end[0], string)
-            self.blocks.append(block)
-            self.current_block = block
-
-    def make_index(self):
-        """ Make the index mapping lines of actual code to their associated
-        prefix comments.
-        """
-        for prev, block in zip(self.blocks[:-1], self.blocks[1:]):
-            if not block.is_comment:
-                self.index[block.start_lineno] = prev
-
-    def search_for_comment(self, lineno, default=None):
-        """ Find the comment block just before the given line number.
-
-        Returns None (or the specified default) if there is no such block.
-        """
-        if not self.index:
-            self.make_index()
-        block = self.index.get(lineno, None)
-        text = getattr(block, 'text', default)
-        return text
-
-
-def strip_comment_marker(text):
-    """ Strip # markers at the front of a block of comment text.
-    """
-    lines = []
-    for line in text.splitlines():
-        lines.append(line.lstrip('#'))
-    text = textwrap.dedent('\n'.join(lines))
-    return text
-
-
-def get_class_traits(klass):
-    """ Yield all of the documentation for trait definitions on a class object.
-    """
-    # FIXME: gracefully handle errors here or in the caller?
-    source = inspect.getsource(klass)
-    cb = CommentBlocker()
-    cb.process_file(StringIO(source))
-    mod_ast = compiler.parse(source)
-    class_ast = mod_ast.node.nodes[0]
-    for node in class_ast.code.nodes:
-        # FIXME: handle other kinds of assignments?
-        if isinstance(node, compiler.ast.Assign):
-            name = node.nodes[0].name
-            rhs = unparse(node.expr).strip()
-            doc = strip_comment_marker(cb.search_for_comment(node.lineno, default=''))
-            yield name, rhs, doc
-

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/comment_eater.py (from rev 6149, trunk/doc/sphinxext/comment_eater.py)
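
As a quick illustration of the helper at the end of the deleted file,
strip_comment_marker drops the leading '#' markers and dedents what is left.
A small self-contained sketch with the same logic (expected output shown in
comments):

    import textwrap

    def strip_comment_marker(text):
        # Same approach as the deleted helper: strip '#' per line, then dedent.
        lines = [line.lstrip('#') for line in text.splitlines()]
        return textwrap.dedent('\n'.join(lines))

    print(strip_comment_marker("# A float trait.\n# Defaults to 1.0."))
    # A float trait.
    # Defaults to 1.0.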

Deleted: branches/dynamic_cpu_configuration/doc/sphinxext/compiler_unparse.py
===================================================================
--- trunk/doc/sphinxext/compiler_unparse.py	2008-12-16 18:53:25 UTC (rev 6149)
+++ branches/dynamic_cpu_configuration/doc/sphinxext/compiler_unparse.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,860 +0,0 @@
-""" Turn compiler.ast structures back into executable python code.
-
-    The unparse method takes a compiler.ast tree and transforms it back into
-    valid python code.  It is incomplete and currently only works for
-    import statements, function calls, function definitions, assignments, and
-    basic expressions.
-
-    Inspired by python-2.5-svn/Demo/parser/unparse.py
-
-    fixme: We may want to move to using _ast trees because the compiler for
-           them is about 6 times faster than compiler.compile.
-"""
-
-import sys
-import cStringIO
-from compiler.ast import Const, Name, Tuple, Div, Mul, Sub, Add
-
-def unparse(ast, single_line_functions=False):
-    s = cStringIO.StringIO()
-    UnparseCompilerAst(ast, s, single_line_functions)
-    return s.getvalue().lstrip()
-
-op_precedence = { 'compiler.ast.Power':3, 'compiler.ast.Mul':2, 'compiler.ast.Div':2,
-                  'compiler.ast.Add':1, 'compiler.ast.Sub':1 }
-
-class UnparseCompilerAst:
-    """ Methods in this class recursively traverse an AST and
-        output source code for the abstract syntax; original formatting
-        is disregarded.
-    """
-
-    #########################################################################
-    # object interface.
-    #########################################################################
-
-    def __init__(self, tree, file = sys.stdout, single_line_functions=False):
-        """ Unparser(tree, file=sys.stdout) -> None.
-
-            Print the source for tree to file.
-        """
-        self.f = file
-        self._single_func = single_line_functions
-        self._do_indent = True
-        self._indent = 0
-        self._dispatch(tree)
-        self._write("\n")
-        self.f.flush()
-
-    #########################################################################
-    # Unparser private interface.
-    #########################################################################
-
-    ### format, output, and dispatch methods ################################
-
-    def _fill(self, text = ""):
-        "Indent a piece of text, according to the current indentation level"
-        if self._do_indent:
-            self._write("\n"+"    "*self._indent + text)
-        else:
-            self._write(text)
-
-    def _write(self, text):
-        "Append a piece of text to the current line."
-        self.f.write(text)
-
-    def _enter(self):
-        "Print ':', and increase the indentation."
-        self._write(": ")
-        self._indent += 1
-
-    def _leave(self):
-        "Decrease the indentation level."
-        self._indent -= 1
-
-    def _dispatch(self, tree):
-        "_dispatcher function, _dispatching tree type T to method _T."
-        if isinstance(tree, list):
-            for t in tree:
-                self._dispatch(t)
-            return
-        meth = getattr(self, "_"+tree.__class__.__name__)
-        if tree.__class__.__name__ == 'NoneType' and not self._do_indent:
-            return
-        meth(tree)
-
-
-    #########################################################################
-    # compiler.ast unparsing methods.
-    #
-    # There should be one method per concrete grammar type. They are
-    # organized in alphabetical order.
-    #########################################################################
-
-    def _Add(self, t):
-        self.__binary_op(t, '+')
-
-    def _And(self, t):
-        self._write(" (")
-        for i, node in enumerate(t.nodes):
-            self._dispatch(node)
-            if i != len(t.nodes)-1:
-                self._write(") and (")
-        self._write(")")
-               
-    def _AssAttr(self, t):
-        """ Handle assigning an attribute of an object
-        """
-        self._dispatch(t.expr)
-        self._write('.'+t.attrname)
- 
-    def _Assign(self, t):
-        """ Expression Assignment such as "a = 1".
-
-            This only handles assignment in expressions.  Keyword assignment
-            is handled separately.
-        """
-        self._fill()
-        for target in t.nodes:
-            self._dispatch(target)
-            self._write(" = ")
-        self._dispatch(t.expr)
-        if not self._do_indent:
-            self._write('; ')
-
-    def _AssName(self, t):
-        """ Name on left hand side of expression.
-
-            Treat just like a name on the right side of an expression.
-        """
-        self._Name(t)
-
-    def _AssTuple(self, t):
-        """ Tuple on left hand side of an expression.
-        """
-
-        # _write each elements, separated by a comma.
-        for element in t.nodes[:-1]:
-            self._dispatch(element)
-            self._write(", ")
-
-        # Handle the last one without writing comma
-        last_element = t.nodes[-1]
-        self._dispatch(last_element)
-
-    def _AugAssign(self, t):
-        """ +=,-=,*=,/=,**=, etc. operations
-        """
-        
-        self._fill()
-        self._dispatch(t.node)
-        self._write(' '+t.op+' ')
-        self._dispatch(t.expr)
-        if not self._do_indent:
-            self._write(';')
-            
-    def _Bitand(self, t):
-        """ Bit and operation.
-        """
-        
-        for i, node in enumerate(t.nodes):
-            self._write("(")
-            self._dispatch(node)
-            self._write(")")
-            if i != len(t.nodes)-1:
-                self._write(" & ")
-                
-    def _Bitor(self, t):
-        """ Bit or operation
-        """
-        
-        for i, node in enumerate(t.nodes):
-            self._write("(")
-            self._dispatch(node)
-            self._write(")")
-            if i != len(t.nodes)-1:
-                self._write(" | ")
-                
-    def _CallFunc(self, t):
-        """ Function call.
-        """
-        self._dispatch(t.node)
-        self._write("(")
-        comma = False
-        for e in t.args:
-            if comma: self._write(", ")
-            else: comma = True
-            self._dispatch(e)
-        if t.star_args:
-            if comma: self._write(", ")
-            else: comma = True
-            self._write("*")
-            self._dispatch(t.star_args)
-        if t.dstar_args:
-            if comma: self._write(", ")
-            else: comma = True
-            self._write("**")
-            self._dispatch(t.dstar_args)
-        self._write(")")
-
-    def _Compare(self, t):
-        self._dispatch(t.expr)
-        for op, expr in t.ops:
-            self._write(" " + op + " ")
-            self._dispatch(expr)
-
-    def _Const(self, t):
-        """ A constant value such as an integer value, 3, or a string, "hello".
-        """
-        self._dispatch(t.value)
-
-    def _Decorators(self, t):
-        """ Handle function decorators (eg. @has_units)
-        """
-        for node in t.nodes:
-            self._dispatch(node)
-
-    def _Dict(self, t):
-        self._write("{")
-        for  i, (k, v) in enumerate(t.items):
-            self._dispatch(k)
-            self._write(": ")
-            self._dispatch(v)
-            if i < len(t.items)-1:
-                self._write(", ")
-        self._write("}")
-
-    def _Discard(self, t):
-        """ Node for when return value is ignored such as in "foo(a)".
-        """
-        self._fill()
-        self._dispatch(t.expr)
-
-    def _Div(self, t):
-        self.__binary_op(t, '/')
-
-    def _Ellipsis(self, t):
-        self._write("...")
-
-    def _From(self, t):
-        """ Handle "from xyz import foo, bar as baz".
-        """
-        # fixme: Are From and ImportFrom handled differently?
-        self._fill("from ")
-        self._write(t.modname)
-        self._write(" import ")
-        for i, (name,asname) in enumerate(t.names):
-            if i != 0:
-                self._write(", ")
-            self._write(name)
-            if asname is not None:
-                self._write(" as "+asname)
-                
-    def _Function(self, t):
-        """ Handle function definitions
-        """
-        if t.decorators is not None:
-            self._fill("@")
-            self._dispatch(t.decorators)
-        self._fill("def "+t.name + "(")
-        defaults = [None] * (len(t.argnames) - len(t.defaults)) + list(t.defaults)
-        for i, arg in enumerate(zip(t.argnames, defaults)):
-            self._write(arg[0])
-            if arg[1] is not None:
-                self._write('=')
-                self._dispatch(arg[1])
-            if i < len(t.argnames)-1:
-                self._write(', ')
-        self._write(")")
-        if self._single_func:
-            self._do_indent = False
-        self._enter()
-        self._dispatch(t.code)
-        self._leave()
-        self._do_indent = True
-
-    def _Getattr(self, t):
-        """ Handle getting an attribute of an object
-        """
-        if isinstance(t.expr, (Div, Mul, Sub, Add)):
-            self._write('(')
-            self._dispatch(t.expr)
-            self._write(')')
-        else:
-            self._dispatch(t.expr)
-            
-        self._write('.'+t.attrname)
-        
-    def _If(self, t):
-        self._fill()
-        
-        for i, (compare,code) in enumerate(t.tests):
-            if i == 0:
-                self._write("if ")
-            else:
-                self._write("elif ")
-            self._dispatch(compare)
-            self._enter()
-            self._fill()
-            self._dispatch(code)
-            self._leave()
-            self._write("\n")
-
-        if t.else_ is not None:
-            self._write("else")
-            self._enter()
-            self._fill()
-            self._dispatch(t.else_)
-            self._leave()
-            self._write("\n")
-            
-    def _IfExp(self, t):
-        self._dispatch(t.then)
-        self._write(" if ")
-        self._dispatch(t.test)
-
-        if t.else_ is not None:
-            self._write(" else (")
-            self._dispatch(t.else_)
-            self._write(")")
-
-    def _Import(self, t):
-        """ Handle "import xyz.foo".
-        """
-        self._fill("import ")
-        
-        for i, (name,asname) in enumerate(t.names):
-            if i != 0:
-                self._write(", ")
-            self._write(name)
-            if asname is not None:
-                self._write(" as "+asname)
-
-    def _Keyword(self, t):
-        """ Keyword value assignment within function calls and definitions.
-        """
-        self._write(t.name)
-        self._write("=")
-        self._dispatch(t.expr)
-        
-    def _List(self, t):
-        self._write("[")
-        for  i,node in enumerate(t.nodes):
-            self._dispatch(node)
-            if i < len(t.nodes)-1:
-                self._write(", ")
-        self._write("]")
-
-    def _Module(self, t):
-        if t.doc is not None:
-            self._dispatch(t.doc)
-        self._dispatch(t.node)
-
-    def _Mul(self, t):
-        self.__binary_op(t, '*')
-
-    def _Name(self, t):
-        self._write(t.name)
-
-    def _NoneType(self, t):
-        self._write("None")
-        
-    def _Not(self, t):
-        self._write('not (')
-        self._dispatch(t.expr)
-        self._write(')')
-        
-    def _Or(self, t):
-        self._write(" (")
-        for i, node in enumerate(t.nodes):
-            self._dispatch(node)
-            if i != len(t.nodes)-1:
-                self._write(") or (")
-        self._write(")")
-                
-    def _Pass(self, t):
-        self._write("pass\n")
-
-    def _Printnl(self, t):
-        self._fill("print ")
-        if t.dest:
-            self._write(">> ")
-            self._dispatch(t.dest)
-            self._write(", ")
-        comma = False
-        for node in t.nodes:
-            if comma: self._write(', ')
-            else: comma = True
-            self._dispatch(node)
-
-    def _Power(self, t):
-        self.__binary_op(t, '**')
-
-    def _Return(self, t):
-        self._fill("return ")
-        if t.value:
-            if isinstance(t.value, Tuple):
-                text = ', '.join([ name.name for name in t.value.asList() ])
-                self._write(text)
-            else:
-                self._dispatch(t.value)
-            if not self._do_indent:
-                self._write('; ')
-
-    def _Slice(self, t):
-        self._dispatch(t.expr)
-        self._write("[")
-        if t.lower:
-            self._dispatch(t.lower)
-        self._write(":")
-        if t.upper:
-            self._dispatch(t.upper)
-        #if t.step:
-        #    self._write(":")
-        #    self._dispatch(t.step)
-        self._write("]")
-
-    def _Sliceobj(self, t):
-        for i, node in enumerate(t.nodes):
-            if i != 0:
-                self._write(":")
-            if not (isinstance(node, Const) and node.value is None):
-                self._dispatch(node)
-
-    def _Stmt(self, tree):
-        for node in tree.nodes:
-            self._dispatch(node)
-
-    def _Sub(self, t):
-        self.__binary_op(t, '-')
-
-    def _Subscript(self, t):
-        self._dispatch(t.expr)
-        self._write("[")
-        for i, value in enumerate(t.subs):
-            if i != 0:
-                self._write(",")
-            self._dispatch(value)
-        self._write("]")
-
-    def _TryExcept(self, t):
-        self._fill("try")
-        self._enter()
-        self._dispatch(t.body)
-        self._leave()
-
-        for handler in t.handlers:
-            self._fill('except ')
-            self._dispatch(handler[0])
-            if handler[1] is not None:
-                self._write(', ')
-                self._dispatch(handler[1])
-            self._enter()
-            self._dispatch(handler[2])
-            self._leave()
-            
-        if t.else_:
-            self._fill("else")
-            self._enter()
-            self._dispatch(t.else_)
-            self._leave()
-
-    def _Tuple(self, t):
-
-        if not t.nodes:
-            # Empty tuple.
-            self._write("()")
-        else:
-            self._write("(")
-
-            # _write each elements, separated by a comma.
-            for element in t.nodes[:-1]:
-                self._dispatch(element)
-                self._write(", ")
-
-            # Handle the last one without writing comma
-            last_element = t.nodes[-1]
-            self._dispatch(last_element)
-
-            self._write(")")
-            
-    def _UnaryAdd(self, t):
-        self._write("+")
-        self._dispatch(t.expr)
-        
-    def _UnarySub(self, t):
-        self._write("-")
-        self._dispatch(t.expr)        
-
-    def _With(self, t):
-        self._fill('with ')
-        self._dispatch(t.expr)
-        if t.vars:
-            self._write(' as ')
-            self._dispatch(t.vars.name)
-        self._enter()
-        self._dispatch(t.body)
-        self._leave()
-        self._write('\n')
-        
-    def _int(self, t):
-        self._write(repr(t))
-
-    def __binary_op(self, t, symbol):
-        # Check if parenthesis are needed on left side and then dispatch
-        has_paren = False
-        left_class = str(t.left.__class__)
-        if (left_class in op_precedence.keys() and
-            op_precedence[left_class] < op_precedence[str(t.__class__)]):
-            has_paren = True
-        if has_paren:
-            self._write('(')
-        self._dispatch(t.left)
-        if has_paren:
-            self._write(')')
-        # Write the appropriate symbol for operator
-        self._write(symbol)
-        # Check if parenthesis are needed on the right side and then dispatch
-        has_paren = False
-        right_class = str(t.right.__class__)
-        if (right_class in op_precedence.keys() and
-            op_precedence[right_class] < op_precedence[str(t.__class__)]):
-            has_paren = True
-        if has_paren:
-            self._write('(')
-        self._dispatch(t.right)
-        if has_paren:
-            self._write(')')
-
-    def _float(self, t):
-        # if t is 0.1, str(t)->'0.1' while repr(t)->'0.1000000000001'
-        # We prefer str here.
-        self._write(str(t))
-
-    def _str(self, t):
-        self._write(repr(t))
-        
-    def _tuple(self, t):
-        self._write(str(t))
-
-    #########################################################################
-    # These are the methods from the _ast modules unparse.
-    #
-    # As our needs to handle more advanced code increase, we may want to
-    # modify some of the methods below so that they work for compiler.ast.
-    #########################################################################
-
-#    # stmt
-#    def _Expr(self, tree):
-#        self._fill()
-#        self._dispatch(tree.value)
-#
-#    def _Import(self, t):
-#        self._fill("import ")
-#        first = True
-#        for a in t.names:
-#            if first:
-#                first = False
-#            else:
-#                self._write(", ")
-#            self._write(a.name)
-#            if a.asname:
-#                self._write(" as "+a.asname)
-#
-##    def _ImportFrom(self, t):
-##        self._fill("from ")
-##        self._write(t.module)
-##        self._write(" import ")
-##        for i, a in enumerate(t.names):
-##            if i == 0:
-##                self._write(", ")
-##            self._write(a.name)
-##            if a.asname:
-##                self._write(" as "+a.asname)
-##        # XXX(jpe) what is level for?
-##
-#
-#    def _Break(self, t):
-#        self._fill("break")
-#
-#    def _Continue(self, t):
-#        self._fill("continue")
-#
-#    def _Delete(self, t):
-#        self._fill("del ")
-#        self._dispatch(t.targets)
-#
-#    def _Assert(self, t):
-#        self._fill("assert ")
-#        self._dispatch(t.test)
-#        if t.msg:
-#            self._write(", ")
-#            self._dispatch(t.msg)
-#
-#    def _Exec(self, t):
-#        self._fill("exec ")
-#        self._dispatch(t.body)
-#        if t.globals:
-#            self._write(" in ")
-#            self._dispatch(t.globals)
-#        if t.locals:
-#            self._write(", ")
-#            self._dispatch(t.locals)
-#
-#    def _Print(self, t):
-#        self._fill("print ")
-#        do_comma = False
-#        if t.dest:
-#            self._write(">>")
-#            self._dispatch(t.dest)
-#            do_comma = True
-#        for e in t.values:
-#            if do_comma:self._write(", ")
-#            else:do_comma=True
-#            self._dispatch(e)
-#        if not t.nl:
-#            self._write(",")
-#
-#    def _Global(self, t):
-#        self._fill("global")
-#        for i, n in enumerate(t.names):
-#            if i != 0:
-#                self._write(",")
-#            self._write(" " + n)
-#
-#    def _Yield(self, t):
-#        self._fill("yield")
-#        if t.value:
-#            self._write(" (")
-#            self._dispatch(t.value)
-#            self._write(")")
-#
-#    def _Raise(self, t):
-#        self._fill('raise ')
-#        if t.type:
-#            self._dispatch(t.type)
-#        if t.inst:
-#            self._write(", ")
-#            self._dispatch(t.inst)
-#        if t.tback:
-#            self._write(", ")
-#            self._dispatch(t.tback)
-#
-#
-#    def _TryFinally(self, t):
-#        self._fill("try")
-#        self._enter()
-#        self._dispatch(t.body)
-#        self._leave()
-#
-#        self._fill("finally")
-#        self._enter()
-#        self._dispatch(t.finalbody)
-#        self._leave()
-#
-#    def _excepthandler(self, t):
-#        self._fill("except ")
-#        if t.type:
-#            self._dispatch(t.type)
-#        if t.name:
-#            self._write(", ")
-#            self._dispatch(t.name)
-#        self._enter()
-#        self._dispatch(t.body)
-#        self._leave()
-#
-#    def _ClassDef(self, t):
-#        self._write("\n")
-#        self._fill("class "+t.name)
-#        if t.bases:
-#            self._write("(")
-#            for a in t.bases:
-#                self._dispatch(a)
-#                self._write(", ")
-#            self._write(")")
-#        self._enter()
-#        self._dispatch(t.body)
-#        self._leave()
-#
-#    def _FunctionDef(self, t):
-#        self._write("\n")
-#        for deco in t.decorators:
-#            self._fill("@")
-#            self._dispatch(deco)
-#        self._fill("def "+t.name + "(")
-#        self._dispatch(t.args)
-#        self._write(")")
-#        self._enter()
-#        self._dispatch(t.body)
-#        self._leave()
-#
-#    def _For(self, t):
-#        self._fill("for ")
-#        self._dispatch(t.target)
-#        self._write(" in ")
-#        self._dispatch(t.iter)
-#        self._enter()
-#        self._dispatch(t.body)
-#        self._leave()
-#        if t.orelse:
-#            self._fill("else")
-#            self._enter()
-#            self._dispatch(t.orelse)
-#            self._leave
-#
-#    def _While(self, t):
-#        self._fill("while ")
-#        self._dispatch(t.test)
-#        self._enter()
-#        self._dispatch(t.body)
-#        self._leave()
-#        if t.orelse:
-#            self._fill("else")
-#            self._enter()
-#            self._dispatch(t.orelse)
-#            self._leave
-#
-#    # expr
-#    def _Str(self, tree):
-#        self._write(repr(tree.s))
-##
-#    def _Repr(self, t):
-#        self._write("`")
-#        self._dispatch(t.value)
-#        self._write("`")
-#
-#    def _Num(self, t):
-#        self._write(repr(t.n))
-#
-#    def _ListComp(self, t):
-#        self._write("[")
-#        self._dispatch(t.elt)
-#        for gen in t.generators:
-#            self._dispatch(gen)
-#        self._write("]")
-#
-#    def _GeneratorExp(self, t):
-#        self._write("(")
-#        self._dispatch(t.elt)
-#        for gen in t.generators:
-#            self._dispatch(gen)
-#        self._write(")")
-#
-#    def _comprehension(self, t):
-#        self._write(" for ")
-#        self._dispatch(t.target)
-#        self._write(" in ")
-#        self._dispatch(t.iter)
-#        for if_clause in t.ifs:
-#            self._write(" if ")
-#            self._dispatch(if_clause)
-#
-#    def _IfExp(self, t):
-#        self._dispatch(t.body)
-#        self._write(" if ")
-#        self._dispatch(t.test)
-#        if t.orelse:
-#            self._write(" else ")
-#            self._dispatch(t.orelse)
-#
-#    unop = {"Invert":"~", "Not": "not", "UAdd":"+", "USub":"-"}
-#    def _UnaryOp(self, t):
-#        self._write(self.unop[t.op.__class__.__name__])
-#        self._write("(")
-#        self._dispatch(t.operand)
-#        self._write(")")
-#
-#    binop = { "Add":"+", "Sub":"-", "Mult":"*", "Div":"/", "Mod":"%",
-#                    "LShift":">>", "RShift":"<<", "BitOr":"|", "BitXor":"^", "BitAnd":"&",
-#                    "FloorDiv":"//", "Pow": "**"}
-#    def _BinOp(self, t):
-#        self._write("(")
-#        self._dispatch(t.left)
-#        self._write(")" + self.binop[t.op.__class__.__name__] + "(")
-#        self._dispatch(t.right)
-#        self._write(")")
-#
-#    boolops = {_ast.And: 'and', _ast.Or: 'or'}
-#    def _BoolOp(self, t):
-#        self._write("(")
-#        self._dispatch(t.values[0])
-#        for v in t.values[1:]:
-#            self._write(" %s " % self.boolops[t.op.__class__])
-#            self._dispatch(v)
-#        self._write(")")
-#
-#    def _Attribute(self,t):
-#        self._dispatch(t.value)
-#        self._write(".")
-#        self._write(t.attr)
-#
-##    def _Call(self, t):
-##        self._dispatch(t.func)
-##        self._write("(")
-##        comma = False
-##        for e in t.args:
-##            if comma: self._write(", ")
-##            else: comma = True
-##            self._dispatch(e)
-##        for e in t.keywords:
-##            if comma: self._write(", ")
-##            else: comma = True
-##            self._dispatch(e)
-##        if t.starargs:
-##            if comma: self._write(", ")
-##            else: comma = True
-##            self._write("*")
-##            self._dispatch(t.starargs)
-##        if t.kwargs:
-##            if comma: self._write(", ")
-##            else: comma = True
-##            self._write("**")
-##            self._dispatch(t.kwargs)
-##        self._write(")")
-#
-#    # slice
-#    def _Index(self, t):
-#        self._dispatch(t.value)
-#
-#    def _ExtSlice(self, t):
-#        for i, d in enumerate(t.dims):
-#            if i != 0:
-#                self._write(': ')
-#            self._dispatch(d)
-#
-#    # others
-#    def _arguments(self, t):
-#        first = True
-#        nonDef = len(t.args)-len(t.defaults)
-#        for a in t.args[0:nonDef]:
-#            if first:first = False
-#            else: self._write(", ")
-#            self._dispatch(a)
-#        for a,d in zip(t.args[nonDef:], t.defaults):
-#            if first:first = False
-#            else: self._write(", ")
-#            self._dispatch(a),
-#            self._write("=")
-#            self._dispatch(d)
-#        if t.vararg:
-#            if first:first = False
-#            else: self._write(", ")
-#            self._write("*"+t.vararg)
-#        if t.kwarg:
-#            if first:first = False
-#            else: self._write(", ")
-#            self._write("**"+t.kwarg)
-#
-##    def _keyword(self, t):
-##        self._write(t.arg)
-##        self._write("=")
-##        self._dispatch(t.value)
-#
-#    def _Lambda(self, t):
-#        self._write("lambda ")
-#        self._dispatch(t.args)
-#        self._write(": ")
-#        self._dispatch(t.body)
-
-
-

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/compiler_unparse.py (from rev 6149, trunk/doc/sphinxext/compiler_unparse.py)
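
The module above only runs on Python 2, since it builds on the compiler
package (removed in Python 3). Under that assumption, a rough usage sketch of
the unparse entry point described in its docstring:

    # Python 2 only: round-trip a small statement through compiler.parse/unparse.
    import compiler
    from compiler_unparse import unparse

    tree = compiler.parse("area = width * height")
    print(unparse(tree))    # prints roughly "area = width * height"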

Deleted: branches/dynamic_cpu_configuration/doc/sphinxext/docscrape.py
===================================================================
--- trunk/doc/sphinxext/docscrape.py	2008-12-16 18:53:25 UTC (rev 6149)
+++ branches/dynamic_cpu_configuration/doc/sphinxext/docscrape.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,492 +0,0 @@
-"""Extract reference documentation from the NumPy source tree.
-
-"""
-
-import inspect
-import textwrap
-import re
-import pydoc
-from StringIO import StringIO
-from warnings import warn
-
-class Reader(object):
-    """A line-based string reader.
-
-    """
-    def __init__(self, data):
-        """
-        Parameters
-        ----------
-        data : str
-           String with lines separated by '\n'.
-
-        """
-        if isinstance(data,list):
-            self._str = data
-        else:
-            self._str = data.split('\n') # store string as list of lines
-
-        self.reset()
-
-    def __getitem__(self, n):
-        return self._str[n]
-
-    def reset(self):
-        self._l = 0 # current line nr
-
-    def read(self):
-        if not self.eof():
-            out = self[self._l]
-            self._l += 1
-            return out
-        else:
-            return ''
-
-    def seek_next_non_empty_line(self):
-        for l in self[self._l:]:
-            if l.strip():
-                break
-            else:
-                self._l += 1
-
-    def eof(self):
-        return self._l >= len(self._str)
-
-    def read_to_condition(self, condition_func):
-        start = self._l
-        for line in self[start:]:
-            if condition_func(line):
-                return self[start:self._l]
-            self._l += 1
-            if self.eof():
-                return self[start:self._l+1]
-        return []
-
-    def read_to_next_empty_line(self):
-        self.seek_next_non_empty_line()
-        def is_empty(line):
-            return not line.strip()
-        return self.read_to_condition(is_empty)
-
-    def read_to_next_unindented_line(self):
-        def is_unindented(line):
-            return (line.strip() and (len(line.lstrip()) == len(line)))
-        return self.read_to_condition(is_unindented)
-
-    def peek(self,n=0):
-        if self._l + n < len(self._str):
-            return self[self._l + n]
-        else:
-            return ''
-
-    def is_empty(self):
-        return not ''.join(self._str).strip()
-
-
-class NumpyDocString(object):
-    def __init__(self,docstring):
-        docstring = textwrap.dedent(docstring).split('\n')
-
-        self._doc = Reader(docstring)
-        self._parsed_data = {
-            'Signature': '',
-            'Summary': [''],
-            'Extended Summary': [],
-            'Parameters': [],
-            'Returns': [],
-            'Raises': [],
-            'Warns': [],
-            'Other Parameters': [],
-            'Attributes': [],
-            'Methods': [],
-            'See Also': [],
-            'Notes': [],
-            'Warnings': [],
-            'References': '',
-            'Examples': '',
-            'index': {}
-            }
-
-        self._parse()
-
-    def __getitem__(self,key):
-        return self._parsed_data[key]
-
-    def __setitem__(self,key,val):
-        if not self._parsed_data.has_key(key):
-            warn("Unknown section %s" % key)
-        else:
-            self._parsed_data[key] = val
-
-    def _is_at_section(self):
-        self._doc.seek_next_non_empty_line()
-
-        if self._doc.eof():
-            return False
-
-        l1 = self._doc.peek().strip()  # e.g. Parameters
-
-        if l1.startswith('.. index::'):
-            return True
-
-        l2 = self._doc.peek(1).strip() #    ---------- or ==========
-        return l2.startswith('-'*len(l1)) or l2.startswith('='*len(l1))
-
-    def _strip(self,doc):
-        i = 0
-        j = 0
-        for i,line in enumerate(doc):
-            if line.strip(): break
-
-        for j,line in enumerate(doc[::-1]):
-            if line.strip(): break
-
-        return doc[i:len(doc)-j]
-
-    def _read_to_next_section(self):
-        section = self._doc.read_to_next_empty_line()
-
-        while not self._is_at_section() and not self._doc.eof():
-            if not self._doc.peek(-1).strip(): # previous line was empty
-                section += ['']
-
-            section += self._doc.read_to_next_empty_line()
-
-        return section
-
-    def _read_sections(self):
-        while not self._doc.eof():
-            data = self._read_to_next_section()
-            name = data[0].strip()
-
-            if name.startswith('..'): # index section
-                yield name, data[1:]
-            elif len(data) < 2:
-                yield StopIteration
-            else:
-                yield name, self._strip(data[2:])
-
-    def _parse_param_list(self,content):
-        r = Reader(content)
-        params = []
-        while not r.eof():
-            header = r.read().strip()
-            if ' : ' in header:
-                arg_name, arg_type = header.split(' : ')[:2]
-            else:
-                arg_name, arg_type = header, ''
-
-            desc = r.read_to_next_unindented_line()
-            desc = dedent_lines(desc)
-
-            params.append((arg_name,arg_type,desc))
-
-        return params
-
-    
-    _name_rgx = re.compile(r"^\s*(:(?P<role>\w+):`(?P<name>[a-zA-Z0-9_.-]+)`|"
-                           r" (?P<name2>[a-zA-Z0-9_.-]+))\s*", re.X)
-    def _parse_see_also(self, content):
-        """
-        func_name : Descriptive text
-            continued text
-        another_func_name : Descriptive text
-        func_name1, func_name2, :meth:`func_name`, func_name3
-
-        """
-        items = []
-
-        def parse_item_name(text):
-            """Match ':role:`name`' or 'name'"""
-            m = self._name_rgx.match(text)
-            if m:
-                g = m.groups()
-                if g[1] is None:
-                    return g[3], None
-                else:
-                    return g[2], g[1]
-            raise ValueError("%s is not a item name" % text)
-
-        def push_item(name, rest):
-            if not name:
-                return
-            name, role = parse_item_name(name)
-            items.append((name, list(rest), role))
-            del rest[:]
-
-        current_func = None
-        rest = []
-        
-        for line in content:
-            if not line.strip(): continue
-
-            m = self._name_rgx.match(line)
-            if m and line[m.end():].strip().startswith(':'):
-                push_item(current_func, rest)
-                current_func, line = line[:m.end()], line[m.end():]
-                rest = [line.split(':', 1)[1].strip()]
-                if not rest[0]:
-                    rest = []
-            elif not line.startswith(' '):
-                push_item(current_func, rest)
-                current_func = None
-                if ',' in line:
-                    for func in line.split(','):
-                        push_item(func, [])
-                elif line.strip():
-                    current_func = line
-            elif current_func is not None:
-                rest.append(line.strip())
-        push_item(current_func, rest)
-        return items
-
-    def _parse_index(self, section, content):
-        """
-        .. index:: default
-           :refguide: something, else, and more
-
-        """
-        def strip_each_in(lst):
-            return [s.strip() for s in lst]
-
-        out = {}
-        section = section.split('::')
-        if len(section) > 1:
-            out['default'] = strip_each_in(section[1].split(','))[0]
-        for line in content:
-            line = line.split(':')
-            if len(line) > 2:
-                out[line[1]] = strip_each_in(line[2].split(','))
-        return out
-    
-    def _parse_summary(self):
-        """Grab signature (if given) and summary"""
-        if self._is_at_section():
-            return
-
-        summary = self._doc.read_to_next_empty_line()
-        summary_str = " ".join([s.strip() for s in summary]).strip()
-        if re.compile('^([\w., ]+=)?\s*[\w\.]+\(.*\)$').match(summary_str):
-            self['Signature'] = summary_str
-            if not self._is_at_section():
-                self['Summary'] = self._doc.read_to_next_empty_line()
-        else:
-            self['Summary'] = summary
-
-        if not self._is_at_section():
-            self['Extended Summary'] = self._read_to_next_section()
-    
-    def _parse(self):
-        self._doc.reset()
-        self._parse_summary()
-
-        for (section,content) in self._read_sections():
-            if not section.startswith('..'):
-                section = ' '.join([s.capitalize() for s in section.split(' ')])
-            if section in ('Parameters', 'Attributes', 'Methods',
-                           'Returns', 'Raises', 'Warns'):
-                self[section] = self._parse_param_list(content)
-            elif section.startswith('.. index::'):
-                self['index'] = self._parse_index(section, content)
-            elif section == 'See Also':
-                self['See Also'] = self._parse_see_also(content)
-            else:
-                self[section] = content
-
-    # string conversion routines
-
-    def _str_header(self, name, symbol='-'):
-        return [name, len(name)*symbol]
-
-    def _str_indent(self, doc, indent=4):
-        out = []
-        for line in doc:
-            out += [' '*indent + line]
-        return out
-
-    def _str_signature(self):
-        if self['Signature']:
-            return [self['Signature'].replace('*','\*')] + ['']
-        else:
-            return ['']
-
-    def _str_summary(self):
-        if self['Summary']:
-            return self['Summary'] + ['']
-        else:
-            return []
-
-    def _str_extended_summary(self):
-        if self['Extended Summary']:
-            return self['Extended Summary'] + ['']
-        else:
-            return []
-
-    def _str_param_list(self, name):
-        out = []
-        if self[name]:
-            out += self._str_header(name)
-            for param,param_type,desc in self[name]:
-                out += ['%s : %s' % (param, param_type)]
-                out += self._str_indent(desc)
-            out += ['']
-        return out
-
-    def _str_section(self, name):
-        out = []
-        if self[name]:
-            out += self._str_header(name)
-            out += self[name]
-            out += ['']
-        return out
-
-    def _str_see_also(self, func_role):
-        if not self['See Also']: return []
-        out = []
-        out += self._str_header("See Also")
-        last_had_desc = True
-        for func, desc, role in self['See Also']:
-            if role:
-                link = ':%s:`%s`' % (role, func)
-            elif func_role:
-                link = ':%s:`%s`' % (func_role, func)
-            else:
-                link = "`%s`_" % func
-            if desc or last_had_desc:
-                out += ['']
-                out += [link]
-            else:
-                out[-1] += ", %s" % link
-            if desc:
-                out += self._str_indent([' '.join(desc)])
-                last_had_desc = True
-            else:
-                last_had_desc = False
-        out += ['']
-        return out
-
-    def _str_index(self):
-        idx = self['index']
-        out = []
-        out += ['.. index:: %s' % idx.get('default','')]
-        for section, references in idx.iteritems():
-            if section == 'default':
-                continue
-            out += ['   :%s: %s' % (section, ', '.join(references))]
-        return out
-
-    def __str__(self, func_role=''):
-        out = []
-        out += self._str_signature()
-        out += self._str_summary()
-        out += self._str_extended_summary()
-        for param_list in ('Parameters','Returns','Raises'):
-            out += self._str_param_list(param_list)
-        out += self._str_section('Warnings')
-        out += self._str_see_also(func_role)
-        for s in ('Notes','References','Examples'):
-            out += self._str_section(s)
-        out += self._str_index()
-        return '\n'.join(out)
-
-
-def indent(str,indent=4):
-    indent_str = ' '*indent
-    if str is None:
-        return indent_str
-    lines = str.split('\n')
-    return '\n'.join(indent_str + l for l in lines)
-
-def dedent_lines(lines):
-    """Deindent a list of lines maximally"""
-    return textwrap.dedent("\n".join(lines)).split("\n")
-
-def header(text, style='-'):
-    return text + '\n' + style*len(text) + '\n'
-
-
-class FunctionDoc(NumpyDocString):
-    def __init__(self, func, role='func'):
-        self._f = func
-        self._role = role # e.g. "func" or "meth"
-        try:
-            NumpyDocString.__init__(self,inspect.getdoc(func) or '')
-        except ValueError, e:
-            print '*'*78
-            print "ERROR: '%s' while parsing `%s`" % (e, self._f)
-            print '*'*78
-            #print "Docstring follows:"
-            #print doclines
-            #print '='*78
-
-        if not self['Signature']:
-            func, func_name = self.get_func()
-            try:
-                # try to read signature
-                argspec = inspect.getargspec(func)
-                argspec = inspect.formatargspec(*argspec)
-                argspec = argspec.replace('*','\*')
-                signature = '%s%s' % (func_name, argspec)
-            except TypeError, e:
-                signature = '%s()' % func_name
-            self['Signature'] = signature
-
-    def get_func(self):
-        func_name = getattr(self._f, '__name__', self.__class__.__name__)
-        if inspect.isclass(self._f):
-            func = getattr(self._f, '__call__', self._f.__init__)
-        else:
-            func = self._f
-        return func, func_name
-            
-    def __str__(self):
-        out = ''
-
-        func, func_name = self.get_func()
-        signature = self['Signature'].replace('*', '\*')
-
-        roles = {'func': 'function',
-                 'meth': 'method'}
-
-        if self._role:
-            if not roles.has_key(self._role):
-                print "Warning: invalid role %s" % self._role
-            out += '.. %s:: %s\n    \n\n' % (roles.get(self._role,''),
-                                             func_name)
-
-        out += super(FunctionDoc, self).__str__(func_role=self._role)
-        return out
-
-
-class ClassDoc(NumpyDocString):
-    def __init__(self,cls,modulename='',func_doc=FunctionDoc):
-        if not inspect.isclass(cls):
-            raise ValueError("Initialise using a class. Got %r" % cls)
-        self._cls = cls
-
-        if modulename and not modulename.endswith('.'):
-            modulename += '.'
-        self._mod = modulename
-        self._name = cls.__name__
-        self._func_doc = func_doc
-
-        NumpyDocString.__init__(self, pydoc.getdoc(cls))
-
-    @property
-    def methods(self):
-        return [name for name,func in inspect.getmembers(self._cls)
-                if not name.startswith('_') and callable(func)]
-
-    def __str__(self):
-        out = ''
-        out += super(ClassDoc, self).__str__()
-        out += "\n\n"
-
-        #for m in self.methods:
-        #    print "Parsing `%s`" % m
-        #    out += str(self._func_doc(getattr(self._cls,m), 'meth')) + '\n\n'
-        #    out += '.. index::\n   single: %s; %s\n\n' % (self._name, m)
-
-        return out
-
-

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/docscrape.py (from rev 6149, trunk/doc/sphinxext/docscrape.py)
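
The NumpyDocString class deleted above is essentially a section-by-section
parser for numpy-format docstrings. A short sketch of how it is used (Python 2,
matching the module itself, and assuming it is importable as docscrape;
expected values shown in comments):

    from docscrape import NumpyDocString

    doc = NumpyDocString("""
        Add two numbers.

        Parameters
        ----------
        a : int
            First operand.
        b : int
            Second operand.
        """)

    print(doc['Summary'])      # ['Add two numbers.']
    print(doc['Parameters'])   # [('a', 'int', ['First operand.']),
                               #  ('b', 'int', ['Second operand.'])]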

Deleted: branches/dynamic_cpu_configuration/doc/sphinxext/docscrape_sphinx.py
===================================================================
--- trunk/doc/sphinxext/docscrape_sphinx.py	2008-12-16 18:53:25 UTC (rev 6149)
+++ branches/dynamic_cpu_configuration/doc/sphinxext/docscrape_sphinx.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,133 +0,0 @@
-import re, inspect, textwrap, pydoc
-from docscrape import NumpyDocString, FunctionDoc, ClassDoc
-
-class SphinxDocString(NumpyDocString):
-    # string conversion routines
-    def _str_header(self, name, symbol='`'):
-        return ['.. rubric:: ' + name, '']
-
-    def _str_field_list(self, name):
-        return [':' + name + ':']
-
-    def _str_indent(self, doc, indent=4):
-        out = []
-        for line in doc:
-            out += [' '*indent + line]
-        return out
-
-    def _str_signature(self):
-        return ['']
-        if self['Signature']:
-            return ['``%s``' % self['Signature']] + ['']
-        else:
-            return ['']
-
-    def _str_summary(self):
-        return self['Summary'] + ['']
-
-    def _str_extended_summary(self):
-        return self['Extended Summary'] + ['']
-
-    def _str_param_list(self, name):
-        out = []
-        if self[name]:
-            out += self._str_field_list(name)
-            out += ['']
-            for param,param_type,desc in self[name]:
-                out += self._str_indent(['**%s** : %s' % (param.strip(),
-                                                          param_type)])
-                out += ['']
-                out += self._str_indent(desc,8)
-                out += ['']
-        return out
-
-    def _str_section(self, name):
-        out = []
-        if self[name]:
-            out += self._str_header(name)
-            out += ['']
-            content = textwrap.dedent("\n".join(self[name])).split("\n")
-            out += content
-            out += ['']
-        return out
-
-    def _str_see_also(self, func_role):
-        out = []
-        if self['See Also']:
-            see_also = super(SphinxDocString, self)._str_see_also(func_role)
-            out = ['.. seealso::', '']
-            out += self._str_indent(see_also[2:])
-        return out
-
-    def _str_warnings(self):
-        out = []
-        if self['Warnings']:
-            out = ['.. warning::', '']
-            out += self._str_indent(self['Warnings'])
-        return out
-
-    def _str_index(self):
-        idx = self['index']
-        out = []
-        if len(idx) == 0:
-            return out
-
-        out += ['.. index:: %s' % idx.get('default','')]
-        for section, references in idx.iteritems():
-            if section == 'default':
-                continue
-            elif section == 'refguide':
-                out += ['   single: %s' % (', '.join(references))]
-            else:
-                out += ['   %s: %s' % (section, ','.join(references))]
-        return out
-
-    def _str_references(self):
-        out = []
-        if self['References']:
-            out += self._str_header('References')
-            if isinstance(self['References'], str):
-                self['References'] = [self['References']]
-            out.extend(self['References'])
-            out += ['']
-        return out
-
-    def __str__(self, indent=0, func_role="obj"):
-        out = []
-        out += self._str_signature()
-        out += self._str_index() + ['']
-        out += self._str_summary()
-        out += self._str_extended_summary()
-        for param_list in ('Parameters', 'Attributes', 'Methods',
-                           'Returns','Raises'):
-            out += self._str_param_list(param_list)
-        out += self._str_warnings()
-        out += self._str_see_also(func_role)
-        out += self._str_section('Notes')
-        out += self._str_references()
-        out += self._str_section('Examples')
-        out = self._str_indent(out,indent)
-        return '\n'.join(out)
-
-class SphinxFunctionDoc(SphinxDocString, FunctionDoc):
-    pass
-
-class SphinxClassDoc(SphinxDocString, ClassDoc):
-    pass
-
-def get_doc_object(obj, what=None):
-    if what is None:
-        if inspect.isclass(obj):
-            what = 'class'
-        elif inspect.ismodule(obj):
-            what = 'module'
-        elif callable(obj):
-            what = 'function'
-        else:
-            what = 'object'
-    if what == 'class':
-        return SphinxClassDoc(obj, '', func_doc=SphinxFunctionDoc)
-    elif what in ('function', 'method'):
-        return SphinxFunctionDoc(obj, '')
-    else:
-        return SphinxDocString(pydoc.getdoc(obj))

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/docscrape_sphinx.py (from rev 6149, trunk/doc/sphinxext/docscrape_sphinx.py)
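
The Sphinx-flavoured subclasses above render the parsed sections as field
lists and rubrics rather than underlined headings. A hedged sketch of the
intended use (Python 2, like the modules themselves), with a locally defined
function standing in for a real numpy object:

    from docscrape_sphinx import get_doc_object

    def add(a, b):
        """
        Add two numbers.

        Parameters
        ----------
        a, b : int
            The operands.
        """
        return a + b

    # get_doc_object picks SphinxFunctionDoc for callables; str() yields reST
    # along the lines of "Add two numbers." followed by a ":Parameters:" field list.
    print(str(get_doc_object(add)))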

Deleted: branches/dynamic_cpu_configuration/doc/sphinxext/numpydoc.py
===================================================================
--- trunk/doc/sphinxext/numpydoc.py	2008-12-16 18:53:25 UTC (rev 6149)
+++ branches/dynamic_cpu_configuration/doc/sphinxext/numpydoc.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,116 +0,0 @@
-"""
-========
-numpydoc
-========
-
-Sphinx extension that handles docstrings in the Numpy standard format. [1]
-
-It will:
-
-- Convert Parameters etc. sections to field lists.
-- Convert See Also section to a See also entry.
-- Renumber references.
-- Extract the signature from the docstring, if it can't be determined otherwise.
-
-.. [1] http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines#docstring-standard
-
-"""
-
-import os, re, pydoc
-from docscrape_sphinx import get_doc_object, SphinxDocString
-import inspect
-
-def mangle_docstrings(app, what, name, obj, options, lines,
-                      reference_offset=[0]):
-    if what == 'module':
-        # Strip top title
-        title_re = re.compile(r'^\s*[#*=]{4,}\n[a-z0-9 -]+\n[#*=]{4,}\s*',
-                              re.I|re.S)
-        lines[:] = title_re.sub('', "\n".join(lines)).split("\n")
-    else:
-        doc = get_doc_object(obj, what)
-        lines[:] = str(doc).split("\n")
-
-    if app.config.numpydoc_edit_link and hasattr(obj, '__name__') and \
-           obj.__name__:
-        if hasattr(obj, '__module__'):
-            v = dict(full_name="%s.%s" % (obj.__module__, obj.__name__))
-        else:
-            v = dict(full_name=obj.__name__)
-        lines += ['', '.. htmlonly::', '']
-        lines += ['    %s' % x for x in
-                  (app.config.numpydoc_edit_link % v).split("\n")]
-
-    # replace reference numbers so that there are no duplicates
-    references = []
-    for l in lines:
-        l = l.strip()
-        if l.startswith('.. ['):
-            try:
-                references.append(int(l[len('.. ['):l.index(']')]))
-            except ValueError:
-                print "WARNING: invalid reference in %s docstring" % name
-
-    # Start renaming from the biggest number, otherwise we may
-    # overwrite references.
-    references.sort()
-    if references:
-        for i, line in enumerate(lines):
-            for r in references:
-                new_r = reference_offset[0] + r
-                lines[i] = lines[i].replace('[%d]_' % r,
-                                            '[%d]_' % new_r)
-                lines[i] = lines[i].replace('.. [%d]' % r,
-                                            '.. [%d]' % new_r)
-
-    reference_offset[0] += len(references)
-
-def mangle_signature(app, what, name, obj, options, sig, retann):
-    # Do not try to inspect classes that don't define `__init__`
-    if (inspect.isclass(obj) and
-        'initializes x; see ' in pydoc.getdoc(obj.__init__)):
-        return '', ''
-
-    if not (callable(obj) or hasattr(obj, '__argspec_is_invalid_')): return
-    if not hasattr(obj, '__doc__'): return
-
-    doc = SphinxDocString(pydoc.getdoc(obj))
-    if doc['Signature']:
-        sig = re.sub("^[^(]*", "", doc['Signature'])
-        return sig, ''
-
-def initialize(app):
-    try:
-        app.connect('autodoc-process-signature', mangle_signature)
-    except:
-        monkeypatch_sphinx_ext_autodoc()
-
-def setup(app, get_doc_object_=get_doc_object):
-    global get_doc_object
-    get_doc_object = get_doc_object_
-    
-    app.connect('autodoc-process-docstring', mangle_docstrings)
-    app.connect('builder-inited', initialize)
-    app.add_config_value('numpydoc_edit_link', None, True)
-
-#------------------------------------------------------------------------------
-# Monkeypatch sphinx.ext.autodoc to accept argspecless autodocs (Sphinx < 0.5)
-#------------------------------------------------------------------------------
-
-def monkeypatch_sphinx_ext_autodoc():
-    global _original_format_signature
-    import sphinx.ext.autodoc
-
-    if sphinx.ext.autodoc.format_signature is our_format_signature:
-        return
-
-    print "[numpydoc] Monkeypatching sphinx.ext.autodoc ..."
-    _original_format_signature = sphinx.ext.autodoc.format_signature
-    sphinx.ext.autodoc.format_signature = our_format_signature
-
-def our_format_signature(what, obj):
-    r = mangle_signature(None, what, None, obj, None, None, None)
-    if r is not None:
-        return r[0]
-    else:
-        return _original_format_signature(what, obj)

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/numpydoc.py (from rev 6149, trunk/doc/sphinxext/numpydoc.py)
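
The reference-renumbering step in mangle_docstrings is easy to miss in the diff
noise; the following standalone sketch (not the extension itself) shows the
idea: each docstring's citations are shifted by a running offset so that
``[1]_`` from one docstring cannot collide with ``[1]_`` from another.

    def renumber(lines, offset):
        refs = []
        for l in lines:
            l = l.strip()
            if l.startswith('.. ['):
                refs.append(int(l[len('.. ['):l.index(']')]))
        # Rename the largest numbers first so an already-renamed reference
        # is not picked up and renamed a second time.
        for i in range(len(lines)):
            for r in sorted(refs, reverse=True):
                lines[i] = lines[i].replace('[%d]_' % r, '[%d]_' % (offset + r))
                lines[i] = lines[i].replace('.. [%d]' % r, '.. [%d]' % (offset + r))
        return offset + len(refs)

    lines = ['See [1]_ for details.', '', '.. [1] Some reference.']
    offset = renumber(lines, 10)   # lines now cite [11]_; offset becomes 11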

Deleted: branches/dynamic_cpu_configuration/doc/sphinxext/only_directives.py
===================================================================
--- trunk/doc/sphinxext/only_directives.py	2008-12-16 18:53:25 UTC (rev 6149)
+++ branches/dynamic_cpu_configuration/doc/sphinxext/only_directives.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,87 +0,0 @@
-#
-# A pair of directives for inserting content that will only appear in
-# either html or latex.
-#
-
-from docutils.nodes import Body, Element
-from docutils.writers.html4css1 import HTMLTranslator
-from sphinx.latexwriter import LaTeXTranslator
-from docutils.parsers.rst import directives
-
-class html_only(Body, Element):
-    pass
-
-class latex_only(Body, Element):
-    pass
-
-def run(content, node_class, state, content_offset):
-    text = '\n'.join(content)
-    node = node_class(text)
-    state.nested_parse(content, content_offset, node)
-    return [node]
-
-try:
-    from docutils.parsers.rst import Directive
-except ImportError:
-    from docutils.parsers.rst.directives import _directives
-
-    def html_only_directive(name, arguments, options, content, lineno,
-                            content_offset, block_text, state, state_machine):
-        return run(content, html_only, state, content_offset)
-
-    def latex_only_directive(name, arguments, options, content, lineno,
-                             content_offset, block_text, state, state_machine):
-        return run(content, latex_only, state, content_offset)
-
-    for func in (html_only_directive, latex_only_directive):
-        func.content = 1
-        func.options = {}
-        func.arguments = None
-
-    _directives['htmlonly'] = html_only_directive
-    _directives['latexonly'] = latex_only_directive
-else:
-    class OnlyDirective(Directive):
-        has_content = True
-        required_arguments = 0
-        optional_arguments = 0
-        final_argument_whitespace = True
-        option_spec = {}
-
-        def run(self):
-            self.assert_has_content()
-            return run(self.content, self.node_class,
-                       self.state, self.content_offset)
-
-    class HtmlOnlyDirective(OnlyDirective):
-        node_class = html_only
-
-    class LatexOnlyDirective(OnlyDirective):
-        node_class = latex_only
-
-    directives.register_directive('htmlonly', HtmlOnlyDirective)
-    directives.register_directive('latexonly', LatexOnlyDirective)
-
-def setup(app):
-    app.add_node(html_only)
-    app.add_node(latex_only)
-
-    # Add visit/depart methods to HTML-Translator:
-    def visit_perform(self, node):
-        pass
-    def depart_perform(self, node):
-        pass
-    def visit_ignore(self, node):
-        node.children = []
-    def depart_ignore(self, node):
-        node.children = []
-
-    HTMLTranslator.visit_html_only = visit_perform
-    HTMLTranslator.depart_html_only = depart_perform
-    HTMLTranslator.visit_latex_only = visit_ignore
-    HTMLTranslator.depart_latex_only = depart_ignore
-
-    LaTeXTranslator.visit_html_only = visit_ignore
-    LaTeXTranslator.depart_html_only = depart_ignore
-    LaTeXTranslator.visit_latex_only = visit_perform
-    LaTeXTranslator.depart_latex_only = depart_perform

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/only_directives.py (from rev 6149, trunk/doc/sphinxext/only_directives.py)
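
For context, these directives only take effect once the extension is listed in
a project's conf.py; a minimal sketch (the sys.path line is an assumption about
where the sphinxext directory lives relative to conf.py):

    import os, sys
    sys.path.append(os.path.abspath('sphinxext'))
    extensions = ['only_directives']

    # In a .rst source file:
    #
    #   .. htmlonly::
    #
    #      This paragraph appears in the HTML build only.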

Deleted: branches/dynamic_cpu_configuration/doc/sphinxext/phantom_import.py
===================================================================
--- trunk/doc/sphinxext/phantom_import.py	2008-12-16 18:53:25 UTC (rev 6149)
+++ branches/dynamic_cpu_configuration/doc/sphinxext/phantom_import.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,162 +0,0 @@
-"""
-==============
-phantom_import
-==============
-
-Sphinx extension that makes directives from ``sphinx.ext.autodoc`` and similar
-extensions use docstrings loaded from an XML file.
-
-This extension loads an XML file in the Pydocweb format [1] and
-creates a dummy module that contains the specified docstrings. This
-can be used to get the current docstrings from a Pydocweb instance
-without needing to rebuild the documented module.
-
-.. [1] http://code.google.com/p/pydocweb
-
-"""
-import imp, sys, compiler, types, os, inspect, re
-
-def setup(app):
-    app.connect('builder-inited', initialize)
-    app.add_config_value('phantom_import_file', None, True)
-
-def initialize(app):
-    fn = app.config.phantom_import_file
-    if (fn and os.path.isfile(fn)):
-        print "[numpydoc] Phantom importing modules from", fn, "..."
-        import_phantom_module(fn)
-
-#------------------------------------------------------------------------------
-# Creating 'phantom' modules from an XML description
-#------------------------------------------------------------------------------
-def import_phantom_module(xml_file):
-    """
-    Insert a fake Python module into sys.modules, based on an XML file.
-
-    The XML file is expected to conform to Pydocweb DTD. The fake
-    module will contain dummy objects, which guarantee the following:
-
-    - Docstrings are correct.
-    - Class inheritance relationships are correct (if present in XML).
-    - Function argspec is *NOT* correct (even if present in XML).
-      Instead, the function signature is prepended to the function docstring.
-    - Class attributes are *NOT* correct; instead, they are dummy objects.
-
-    Parameters
-    ----------
-    xml_file : str
-        Name of an XML file to read
-    
-    """
-    import lxml.etree as etree
-
-    object_cache = {}
-
-    tree = etree.parse(xml_file)
-    root = tree.getroot()
-
-    # Sort items so that
-    # - Base classes come before classes inherited from them
-    # - Modules come before their contents
-    all_nodes = dict([(n.attrib['id'], n) for n in root])
-    
-    def _get_bases(node, recurse=False):
-        bases = [x.attrib['ref'] for x in node.findall('base')]
-        if recurse:
-            j = 0
-            while True:
-                try:
-                    b = bases[j]
-                except IndexError: break
-                if b in all_nodes:
-                    bases.extend(_get_bases(all_nodes[b]))
-                j += 1
-        return bases
-
-    type_index = ['module', 'class', 'callable', 'object']
-    
-    def base_cmp(a, b):
-        x = cmp(type_index.index(a.tag), type_index.index(b.tag))
-        if x != 0: return x
-
-        if a.tag == 'class' and b.tag == 'class':
-            a_bases = _get_bases(a, recurse=True)
-            b_bases = _get_bases(b, recurse=True)
-            x = cmp(len(a_bases), len(b_bases))
-            if x != 0: return x
-            if a.attrib['id'] in b_bases: return -1
-            if b.attrib['id'] in a_bases: return 1
-        
-        return cmp(a.attrib['id'].count('.'), b.attrib['id'].count('.'))
-
-    nodes = root.getchildren()
-    nodes.sort(base_cmp)
-
-    # Create phantom items
-    for node in nodes:
-        name = node.attrib['id']
-        doc = (node.text or '').decode('string-escape') + "\n"
-        if doc == "\n": doc = ""
-
-        # create parent, if missing
-        parent = name
-        while True:
-            parent = '.'.join(parent.split('.')[:-1])
-            if not parent: break
-            if parent in object_cache: break
-            obj = imp.new_module(parent)
-            object_cache[parent] = obj
-            sys.modules[parent] = obj
-
-        # create object
-        if node.tag == 'module':
-            obj = imp.new_module(name)
-            obj.__doc__ = doc
-            sys.modules[name] = obj
-        elif node.tag == 'class':
-            bases = [object_cache[b] for b in _get_bases(node)
-                     if b in object_cache]
-            bases.append(object)
-            init = lambda self: None
-            init.__doc__ = doc
-            obj = type(name, tuple(bases), {'__doc__': doc, '__init__': init})
-            obj.__name__ = name.split('.')[-1]
-        elif node.tag == 'callable':
-            funcname = node.attrib['id'].split('.')[-1]
-            argspec = node.attrib.get('argspec')
-            if argspec:
-                argspec = re.sub('^[^(]*', '', argspec)
-                doc = "%s%s\n\n%s" % (funcname, argspec, doc)
-            obj = lambda: 0
-            obj.__argspec_is_invalid_ = True
-            obj.func_name = funcname
-            obj.__name__ = name
-            obj.__doc__ = doc
-            if inspect.isclass(object_cache[parent]):
-                obj.__objclass__ = object_cache[parent]
-        else:
-            class Dummy(object): pass
-            obj = Dummy()
-            obj.__name__ = name
-            obj.__doc__ = doc
-            if inspect.isclass(object_cache[parent]):
-                obj.__get__ = lambda: None
-        object_cache[name] = obj
-
-        if parent:
-            if inspect.ismodule(object_cache[parent]):
-                obj.__module__ = parent
-                setattr(object_cache[parent], name.split('.')[-1], obj)
-
-    # Populate items
-    for node in root:
-        obj = object_cache.get(node.attrib['id'])
-        if obj is None: continue
-        for ref in node.findall('ref'):
-            if node.tag == 'class':
-                if ref.attrib['ref'].startswith(node.attrib['id'] + '.'):
-                    setattr(obj, ref.attrib['name'],
-                            object_cache.get(ref.attrib['ref']))
-            else:
-                setattr(obj, ref.attrib['name'],
-                        object_cache.get(ref.attrib['ref']))

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/phantom_import.py (from rev 6149, trunk/doc/sphinxext/phantom_import.py)
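
The extension is driven by a single config value; a hedged conf.py sketch (the
dump file name is hypothetical):

    extensions = ['phantom_import', 'numpydoc']
    phantom_import_file = 'dump.xml'   # Pydocweb XML dump to load docstrings from

When the builder starts, initialize() calls import_phantom_module on that file,
and the fake modules it builds are installed straight into sys.modules.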

Deleted: branches/dynamic_cpu_configuration/doc/sphinxext/plot_directive.py
===================================================================
--- trunk/doc/sphinxext/plot_directive.py	2008-12-16 18:53:25 UTC (rev 6149)
+++ branches/dynamic_cpu_configuration/doc/sphinxext/plot_directive.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,452 +0,0 @@
-"""
-A special directive for generating a matplotlib plot.
-
-.. warning::
-
-   This is a hacked version of plot_directive.py from Matplotlib.
-   It's very much subject to change!
-
-Usage
------
-
-Can be used like this::
-
-    .. plot:: examples/example.py
-
-    .. plot::
-
-       import matplotlib.pyplot as plt
-       plt.plot([1,2,3], [4,5,6])
-
-    .. plot::
-
-       A plotting example:
-
-       >>> import matplotlib.pyplot as plt
-       >>> plt.plot([1,2,3], [4,5,6])
-
-The content is interpreted as doctest formatted if it has a line starting
-with ``>>>``.
-
-The ``plot`` directive supports the options
-
-    format : {'python', 'doctest'}
-        Specify the format of the input
-    include-source : bool
-        Whether to display the source code. Default can be changed in conf.py
-    
-and the ``image`` directive options ``alt``, ``height``, ``width``,
-``scale``, ``align``, ``class``.
-
-Configuration options
----------------------
-
-The plot directive has the following configuration options:
-
-    plot_output_dir
-        Directory (relative to config file) where to store plot output.
-        Should be inside the static directory. (Default: 'static')
-
-    plot_pre_code
-        Code that should be executed before each plot.
-
-    plot_rcparams
-        Dictionary of Matplotlib rc-parameter overrides.
-        Has 'sane' defaults.
-
-    plot_include_source
-        Default value for the include-source option
-
-
-TODO
-----
-
-* Don't put temp files in the _static directory; instead work the way
-  the pngmath directive does, and plot figures only during output writing.
-
-* Refactor Latex output; now it's plain images, but it would be nice
-  to make them appear side-by-side, or in floats.
-
-"""
-
-import sys, os, glob, shutil, imp, warnings, cStringIO, re, textwrap
-
-def setup(app):
-    setup.app = app
-    setup.config = app.config
-    setup.confdir = app.confdir
-    
-    app.add_config_value('plot_output_dir', '_static', True)
-    app.add_config_value('plot_pre_code', '', True)
-    app.add_config_value('plot_rcparams', sane_rcparameters, True)
-    app.add_config_value('plot_include_source', False, True)
-
-    app.add_directive('plot', plot_directive, True, (0, 1, False),
-                      **plot_directive_options)
-
-sane_rcparameters = {
-    'font.size': 8,
-    'axes.titlesize': 8,
-    'axes.labelsize': 8,
-    'xtick.labelsize': 8,
-    'ytick.labelsize': 8,
-    'legend.fontsize': 8,
-    'figure.figsize': (4, 3),
-}
-
-#------------------------------------------------------------------------------
-# Run code and capture figures
-#------------------------------------------------------------------------------
-
-import matplotlib
-import matplotlib.cbook as cbook
-matplotlib.use('Agg')
-import matplotlib.pyplot as plt
-import matplotlib.image as image
-from matplotlib import _pylab_helpers
-
-def contains_doctest(text):
-    r = re.compile(r'^\s*>>>', re.M)
-    m = r.match(text)
-    return bool(m)
-
-def unescape_doctest(text):
-    """
-    Extract code from a piece of text, which contains either Python code
-    or doctests.
-
-    """
-    if not contains_doctest(text):
-        return text
-
-    code = ""
-    for line in text.split("\n"):
-        m = re.match(r'^\s*(>>>|...) (.*)$', line)
-        if m:
-            code += m.group(2) + "\n"
-        elif line.strip():
-            code += "# " + line.strip() + "\n"
-        else:
-            code += "\n"
-    return code
-
-def run_code(code, code_path):
-    # Change the working directory to the directory of the example, so
-    # it can get at its data files, if any.
-    pwd = os.getcwd()
-    if code_path is not None:
-        os.chdir(os.path.dirname(code_path))
-    stdout = sys.stdout
-    sys.stdout = cStringIO.StringIO()
-    try:
-        code = unescape_doctest(code)
-        ns = {}
-        exec setup.config.plot_pre_code in ns
-        exec code in ns
-    finally:
-        os.chdir(pwd)
-        sys.stdout = stdout
-    return ns
-
-#------------------------------------------------------------------------------
-# Generating figures
-#------------------------------------------------------------------------------
-
-def out_of_date(original, derived):
-    """
-    Returns True if derivative is out-of-date wrt original,
-    both of which are full file paths.
-    """
-    return (not os.path.exists(derived)
-            or os.stat(derived).st_mtime < os.stat(original).st_mtime)
-
-def makefig(code, code_path, output_dir, output_base, config):
-    """
-    run a pyplot script and save the low and high res PNGs and a PDF in _static
-
-    """
-
-    formats = [('png', 100),
-               ('hires.png', 200),
-               ('pdf', 50),
-               ]
-
-    all_exists = True
-
-    # Look for single-figure output files first
-    for format, dpi in formats:
-        output_path = os.path.join(output_dir, '%s.%s' % (output_base, format))
-        if out_of_date(code_path, output_path):
-            all_exists = False
-            break
-
-    if all_exists:
-        return 1
-
-    # Then look for multi-figure output files, assuming
-    # if we have some we have all...
-    i = 0
-    while True:
-        all_exists = True
-        for format, dpi in formats:
-            output_path = os.path.join(output_dir,
-                                       '%s_%02d.%s' % (output_base, i, format))
-            if out_of_date(code_path, output_path):
-                all_exists = False
-                break
-        if all_exists:
-            i += 1
-        else:
-            break
-
-    if i != 0:
-        return i
-
-    # We didn't find the files, so build them
-    print "-- Plotting figures %s" % output_base
-
-    # Clear between runs
-    plt.close('all')
-
-    # Reset figure parameters
-    matplotlib.rcdefaults()
-    matplotlib.rcParams.update(config.plot_rcparams)
-
-    try:
-        run_code(code, code_path)
-    except:
-        # On failure, warn and report zero figures so the caller can emit
-        # a system message, mirroring the per-figure handler below.
-        s = cbook.exception_to_str("Exception running plot %s" % code_path)
-        warnings.warn(s)
-        return 0
-
-    fig_managers = _pylab_helpers.Gcf.get_all_fig_managers()
-    for i, figman in enumerate(fig_managers):
-        for format, dpi in formats:
-            if len(fig_managers) == 1:
-                name = output_base
-            else:
-                name = "%s_%02d" % (output_base, i)
-            path = os.path.join(output_dir, '%s.%s' % (name, format))
-            try:
-                figman.canvas.figure.savefig(path, dpi=dpi)
-            except:
-                s = cbook.exception_to_str("Exception running plot %s"
-                                           % code_path)
-                warnings.warn(s)
-                return 0
-
-    return len(fig_managers)
-
-#------------------------------------------------------------------------------
-# Generating output
-#------------------------------------------------------------------------------
-
-from docutils import nodes, utils
-import jinja
-
-TEMPLATE = """
-{{source_code}}
-
-.. htmlonly::
-
-   {% if source_code %}
-       (`Source code <{{source_link}}>`__)
-   {% endif %}
-
-   .. admonition:: Output
-      :class: plot-output
-
-      {% for name in image_names %}
-      .. figure:: {{link_dir}}/{{name}}.png
-         {%- for option in options %}
-         {{option}}
-         {% endfor %}
-
-         (
-         {%- if not source_code %}`Source code <{{source_link}}>`__, {% endif -%}
-         `PNG <{{link_dir}}/{{name}}.hires.png>`__,
-         `PDF <{{link_dir}}/{{name}}.pdf>`__)
-      {% endfor %}
-
-.. latexonly::
-
-   {% for name in image_names %}
-   .. image:: {{link_dir}}/{{name}}.pdf
-   {% endfor %}
-
-"""
-
-def run(arguments, content, options, state_machine, state, lineno):
-    if arguments and content:
-        raise RuntimeError("plot:: directive can't have both args and content")
-
-    document = state_machine.document
-    config = document.settings.env.config
-
-    options.setdefault('include-source', config.plot_include_source)
-    if options['include-source'] is None:
-        options['include-source'] = config.plot_include_source
-
-    # determine input
-    rst_file = document.attributes['source']
-    rst_dir = os.path.dirname(rst_file)
-    
-    if arguments:
-        file_name = os.path.join(rst_dir, directives.uri(arguments[0]))
-        code = open(file_name, 'r').read()
-        output_base = os.path.basename(file_name)
-    else:
-        file_name = rst_file
-        code = textwrap.dedent("\n".join(map(str, content)))
-        counter = document.attributes.get('_plot_counter', 0) + 1
-        document.attributes['_plot_counter'] = counter
-        output_base = '%d-%s' % (counter, os.path.basename(file_name))
-
-    rel_name = relative_path(file_name, setup.confdir)
-
-    base, ext = os.path.splitext(output_base)
-    if ext in ('.py', '.rst', '.txt'):
-        output_base = base
-
-    # is it in doctest format?
-    is_doctest = contains_doctest(code)
-    if options.has_key('format'):
-        if options['format'] == 'python':
-            is_doctest = False
-        else:
-            is_doctest = True
-
-    # determine output
-    file_rel_dir = os.path.dirname(rel_name)
-    while file_rel_dir.startswith(os.path.sep):
-        file_rel_dir = file_rel_dir[1:]
-
-    output_dir = os.path.join(setup.confdir, setup.config.plot_output_dir,
-                              file_rel_dir)
-
-    if not os.path.exists(output_dir):
-        cbook.mkdirs(output_dir)
-
-    # copy script
-    target_name = os.path.join(output_dir, output_base)
-    f = open(target_name, 'w')
-    f.write(unescape_doctest(code))
-    f.close()
-
-    source_link = relative_path(target_name, rst_dir)
-
-    # determine relative reference
-    link_dir = relative_path(output_dir, rst_dir)
-
-    # make figures
-    num_figs = makefig(code, file_name, output_dir, output_base, config)
-
-    # generate output
-    if options['include-source']:
-        if is_doctest:
-            lines = ['']
-        else:
-            lines = ['.. code-block:: python', '']
-        lines += ['    %s' % row.rstrip() for row in code.split('\n')]
-        source_code = "\n".join(lines)
-    else:
-        source_code = ""
-
-    if num_figs > 0:
-        image_names = []
-        for i in range(num_figs):
-            if num_figs == 1:
-                image_names.append(output_base)
-            else:
-                image_names.append("%s_%02d" % (output_base, i))
-    else:
-        reporter = state.memo.reporter
-        sm = reporter.system_message(3, "Exception occurred rendering plot",
-                                     line=lineno)
-        return [sm]
-
-
-    opts = [':%s: %s' % (key, val) for key, val in options.items()
-            if key in ('alt', 'height', 'width', 'scale', 'align', 'class')]
-
-    result = jinja.from_string(TEMPLATE).render(
-        link_dir=link_dir.replace(os.path.sep, '/'),
-        source_link=source_link,
-        options=opts,
-        image_names=image_names,
-        source_code=source_code)
-
-    lines = result.split("\n")
-    if len(lines):
-        state_machine.insert_input(
-            lines, state_machine.input_lines.source(0))
-    return []
-
-
-def relative_path(target, base):
-    target = os.path.abspath(os.path.normpath(target))
-    base = os.path.abspath(os.path.normpath(base))
-
-    target_parts = target.split(os.path.sep)
-    base_parts = base.split(os.path.sep)
-    rel_parts = 0
-
-    while target_parts and base_parts and target_parts[0] == base_parts[0]:
-        target_parts.pop(0)
-        base_parts.pop(0)
-
-    rel_parts += len(base_parts)
-    return os.path.sep.join([os.path.pardir] * rel_parts + target_parts)
-
-#------------------------------------------------------------------------------
-# plot:: directive registration etc.
-#------------------------------------------------------------------------------
-
-from docutils.parsers.rst import directives
-try:
-    # docutils 0.4
-    from docutils.parsers.rst.directives.images import align
-except ImportError:
-    # docutils 0.5
-    from docutils.parsers.rst.directives.images import Image
-    align = Image.align
-
-try:
-    from docutils.parsers.rst import Directive
-except ImportError:
-    from docutils.parsers.rst.directives import _directives
-
-    def plot_directive(name, arguments, options, content, lineno,
-                       content_offset, block_text, state, state_machine):
-        return run(arguments, content, options, state_machine, state, lineno)
-    plot_directive.__doc__ = __doc__
-else:
-    class plot_directive(Directive):
-        def run(self):
-            return run(self.arguments, self.content, self.options,
-                       self.state_machine, self.state, self.lineno)
-    plot_directive.__doc__ = __doc__
-
-def _option_boolean(arg):
-    if not arg or not arg.strip():
-        return None
-    elif arg.strip().lower() in ('no', '0', 'false'):
-        return False
-    elif arg.strip().lower() in ('yes', '1', 'true'):
-        return True
-    else:
-        raise ValueError('"%s" unknown boolean' % arg)
-
-def _option_format(arg):
-    return directives.choice(arg, ('python', 'doctest'))
-
-plot_directive_options = {'alt': directives.unchanged,
-                          'height': directives.length_or_unitless,
-                          'width': directives.length_or_percentage_or_unitless,
-                          'scale': directives.nonnegative_int,
-                          'align': align,
-                          'class': directives.class_option,
-                          'include-source': _option_boolean,
-                          'format': _option_format,
-                          }

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/plot_directive.py (from rev 6149, trunk/doc/sphinxext/plot_directive.py)
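
For completeness, the four config values registered in setup() above map to
conf.py entries like the following sketch (the concrete values are only
examples, not defaults beyond what the code states):

    extensions = ['plot_directive']
    plot_output_dir = '_static'        # where the PNG/hires PNG/PDF files go
    plot_pre_code = 'import numpy as np\nimport matplotlib.pyplot as plt\n'
    plot_include_source = True         # default for the include-source option
    # plot_rcparams falls back to the 'sane_rcparameters' dict if unset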

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/tests (from rev 6149, trunk/doc/sphinxext/tests)

Deleted: branches/dynamic_cpu_configuration/doc/sphinxext/tests/test_docscrape.py
===================================================================
--- trunk/doc/sphinxext/tests/test_docscrape.py	2008-12-16 18:53:25 UTC (rev 6149)
+++ branches/dynamic_cpu_configuration/doc/sphinxext/tests/test_docscrape.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,490 +0,0 @@
-# -*- encoding:utf-8 -*-
-
-import sys, os
-sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
-
-from docscrape import NumpyDocString, FunctionDoc
-from docscrape_sphinx import SphinxDocString
-from nose.tools import *
-
-doc_txt = '''\
-  numpy.multivariate_normal(mean, cov, shape=None)
-
-  Draw values from a multivariate normal distribution with specified
-  mean and covariance.
-
-  The multivariate normal or Gaussian distribution is a generalisation
-  of the one-dimensional normal distribution to higher dimensions.
-
-  Parameters
-  ----------
-  mean : (N,) ndarray
-      Mean of the N-dimensional distribution.
-
-      .. math::
-
-         (1+2+3)/3
-
-  cov : (N,N) ndarray
-      Covariance matrix of the distribution.
-  shape : tuple of ints
-      Given a shape of, for example, (m,n,k), m*n*k samples are
-      generated, and packed in an m-by-n-by-k arrangement.  Because
-      each sample is N-dimensional, the output shape is (m,n,k,N).
-
-  Returns
-  -------
-  out : ndarray
-      The drawn samples, arranged according to `shape`.  If the
-      shape given is (m,n,...), then the shape of `out` is
-      (m,n,...,N).
-
-      In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
-      value drawn from the distribution.
-
-  Warnings
-  --------
-  Certain warnings apply.
-
-  Notes
-  -----
-
-  Instead of specifying the full covariance matrix, popular
-  approximations include:
-
-    - Spherical covariance (`cov` is a multiple of the identity matrix)
-    - Diagonal covariance (`cov` has non-negative elements only on the diagonal)
-
-  This geometrical property can be seen in two dimensions by plotting
-  generated data-points:
-
-  >>> mean = [0,0]
-  >>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis
-
-  >>> x,y = multivariate_normal(mean,cov,5000).T
-  >>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show()
-
-  Note that the covariance matrix must be symmetric and non-negative
-  definite.
-
-  References
-  ----------
-  .. [1] A. Papoulis, "Probability, Random Variables, and Stochastic
-         Processes," 3rd ed., McGraw-Hill Companies, 1991
-  .. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification,"
-         2nd ed., Wiley, 2001.
-
-  See Also
-  --------
-  some, other, funcs
-  otherfunc : relationship
-
-  Examples
-  --------
-  >>> mean = (1,2)
-  >>> cov = [[1,0],[1,0]]
-  >>> x = multivariate_normal(mean,cov,(3,3))
-  >>> print x.shape
-  (3, 3, 2)
-
-  The following is probably true, given that 0.6 is roughly twice the
-  standard deviation:
-
-  >>> print list( (x[0,0,:] - mean) < 0.6 )
-  [True, True]
-
-  .. index:: random
-     :refguide: random;distributions, random;gauss
-
-  '''
-doc = NumpyDocString(doc_txt)
-
-
-def test_signature():
-    assert doc['Signature'].startswith('numpy.multivariate_normal(')
-    assert doc['Signature'].endswith('shape=None)')
-
-def test_summary():
-    assert doc['Summary'][0].startswith('Draw values')
-    assert doc['Summary'][-1].endswith('covariance.')
-
-def test_extended_summary():
-    assert doc['Extended Summary'][0].startswith('The multivariate normal')
-
-def test_parameters():
-    assert_equal(len(doc['Parameters']), 3)
-    assert_equal([n for n,_,_ in doc['Parameters']], ['mean','cov','shape'])
-
-    arg, arg_type, desc = doc['Parameters'][1]
-    assert_equal(arg_type, '(N,N) ndarray')
-    assert desc[0].startswith('Covariance matrix')
-    assert doc['Parameters'][0][-1][-2] == '   (1+2+3)/3'
-
-def test_returns():
-    assert_equal(len(doc['Returns']), 1)
-    arg, arg_type, desc = doc['Returns'][0]
-    assert_equal(arg, 'out')
-    assert_equal(arg_type, 'ndarray')
-    assert desc[0].startswith('The drawn samples')
-    assert desc[-1].endswith('distribution.')
-
-def test_notes():
-    assert doc['Notes'][0].startswith('Instead')
-    assert doc['Notes'][-1].endswith('definite.')
-    assert_equal(len(doc['Notes']), 17)
-
-def test_references():
-    assert doc['References'][0].startswith('..')
-    assert doc['References'][-1].endswith('2001.')
-
-def test_examples():
-    assert doc['Examples'][0].startswith('>>>')
-    assert doc['Examples'][-1].endswith('True]')
-
-def test_index():
-    assert_equal(doc['index']['default'], 'random')
-    print doc['index']
-    assert_equal(len(doc['index']), 2)
-    assert_equal(len(doc['index']['refguide']), 2)
-
-def non_blank_line_by_line_compare(a,b):
-    a = [l for l in a.split('\n') if l.strip()]
-    b = [l for l in b.split('\n') if l.strip()]
-    for n,line in enumerate(a):
-        if not line == b[n]:
-            raise AssertionError("Lines %s of a and b differ: "
-                                 "\n>>> %s\n<<< %s\n" %
-                                 (n,line,b[n]))
-def test_str():
-    non_blank_line_by_line_compare(str(doc),
-"""numpy.multivariate_normal(mean, cov, shape=None)
-
-Draw values from a multivariate normal distribution with specified
-mean and covariance.
-
-The multivariate normal or Gaussian distribution is a generalisation
-of the one-dimensional normal distribution to higher dimensions.
-
-Parameters
-----------
-mean : (N,) ndarray
-    Mean of the N-dimensional distribution.
-
-    .. math::
-
-       (1+2+3)/3
-
-cov : (N,N) ndarray
-    Covariance matrix of the distribution.
-shape : tuple of ints
-    Given a shape of, for example, (m,n,k), m*n*k samples are
-    generated, and packed in an m-by-n-by-k arrangement.  Because
-    each sample is N-dimensional, the output shape is (m,n,k,N).
-
-Returns
--------
-out : ndarray
-    The drawn samples, arranged according to `shape`.  If the
-    shape given is (m,n,...), then the shape of `out` is
-    (m,n,...,N).
-
-    In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
-    value drawn from the distribution.
-
-Warnings
---------
-Certain warnings apply.
-
-See Also
---------
-`some`_, `other`_, `funcs`_
-
-`otherfunc`_
-    relationship
-
-Notes
------
-Instead of specifying the full covariance matrix, popular
-approximations include:
-
-  - Spherical covariance (`cov` is a multiple of the identity matrix)
-  - Diagonal covariance (`cov` has non-negative elements only on the diagonal)
-
-This geometrical property can be seen in two dimensions by plotting
-generated data-points:
-
->>> mean = [0,0]
->>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis
-
->>> x,y = multivariate_normal(mean,cov,5000).T
->>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show()
-
-Note that the covariance matrix must be symmetric and non-negative
-definite.
-
-References
-----------
-.. [1] A. Papoulis, "Probability, Random Variables, and Stochastic
-       Processes," 3rd ed., McGraw-Hill Companies, 1991
-.. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification,"
-       2nd ed., Wiley, 2001.
-
-Examples
---------
->>> mean = (1,2)
->>> cov = [[1,0],[1,0]]
->>> x = multivariate_normal(mean,cov,(3,3))
->>> print x.shape
-(3, 3, 2)
-
-The following is probably true, given that 0.6 is roughly twice the
-standard deviation:
-
->>> print list( (x[0,0,:] - mean) < 0.6 )
-[True, True]
-
-.. index:: random
-   :refguide: random;distributions, random;gauss""")
-
-
-def test_sphinx_str():
-    sphinx_doc = SphinxDocString(doc_txt)
-    non_blank_line_by_line_compare(str(sphinx_doc),
-"""
-.. index:: random
-   single: random;distributions, random;gauss
-
-Draw values from a multivariate normal distribution with specified
-mean and covariance.
-
-The multivariate normal or Gaussian distribution is a generalisation
-of the one-dimensional normal distribution to higher dimensions.
-
-:Parameters:
-
-    **mean** : (N,) ndarray
-
-        Mean of the N-dimensional distribution.
-
-        .. math::
-
-           (1+2+3)/3
-
-    **cov** : (N,N) ndarray
-
-        Covariance matrix of the distribution.
-
-    **shape** : tuple of ints
-
-        Given a shape of, for example, (m,n,k), m*n*k samples are
-        generated, and packed in an m-by-n-by-k arrangement.  Because
-        each sample is N-dimensional, the output shape is (m,n,k,N).
-
-:Returns:
-
-    **out** : ndarray
-
-        The drawn samples, arranged according to `shape`.  If the
-        shape given is (m,n,...), then the shape of `out` is
-        (m,n,...,N).
-        
-        In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
-        value drawn from the distribution.
-
-.. warning::
-
-    Certain warnings apply.
-
-.. seealso::
-    
-    :obj:`some`, :obj:`other`, :obj:`funcs`
-    
-    :obj:`otherfunc`
-        relationship
-    
-.. rubric:: Notes
-
-Instead of specifying the full covariance matrix, popular
-approximations include:
-
-  - Spherical covariance (`cov` is a multiple of the identity matrix)
-  - Diagonal covariance (`cov` has non-negative elements only on the diagonal)
-
-This geometrical property can be seen in two dimensions by plotting
-generated data-points:
-
->>> mean = [0,0]
->>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis
-
->>> x,y = multivariate_normal(mean,cov,5000).T
->>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show()
-
-Note that the covariance matrix must be symmetric and non-negative
-definite.
-
-.. rubric:: References
-
-.. [1] A. Papoulis, "Probability, Random Variables, and Stochastic
-       Processes," 3rd ed., McGraw-Hill Companies, 1991
-.. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification,"
-       2nd ed., Wiley, 2001.
-
-.. rubric:: Examples
-
->>> mean = (1,2)
->>> cov = [[1,0],[1,0]]
->>> x = multivariate_normal(mean,cov,(3,3))
->>> print x.shape
-(3, 3, 2)
-
-The following is probably true, given that 0.6 is roughly twice the
-standard deviation:
-
->>> print list( (x[0,0,:] - mean) < 0.6 )
-[True, True]
-""")
-
-       
-doc2 = NumpyDocString("""
-    Returns array of indices of the maximum values along the given axis.
-
-    Parameters
-    ----------
-    a : {array_like}
-        Array to look in.
-    axis : {None, integer}
-        If None, the index is into the flattened array, otherwise along
-        the specified axis""")
-
-def test_parameters_without_extended_description():
-    assert_equal(len(doc2['Parameters']), 2)
-
-doc3 = NumpyDocString("""
-    my_signature(*params, **kwds)
-
-    Return this and that.
-    """)
-
-def test_escape_stars():
-    signature = str(doc3).split('\n')[0]
-    assert_equal(signature, 'my_signature(\*params, \*\*kwds)')
-
-doc4 = NumpyDocString(
-    """a.conj()
-
-    Return an array with all complex-valued elements conjugated.""")
-
-def test_empty_extended_summary():
-    assert_equal(doc4['Extended Summary'], [])
-
-doc5 = NumpyDocString(
-    """
-    a.something()
-
-    Raises
-    ------
-    LinAlgException
-        If array is singular.
-
-    """)
-
-def test_raises():
-    assert_equal(len(doc5['Raises']), 1)
-    name,_,desc = doc5['Raises'][0]
-    assert_equal(name,'LinAlgException')
-    assert_equal(desc,['If array is singular.'])
-
-def test_see_also():
-    doc6 = NumpyDocString(
-    """
-    z(x,theta)
-
-    See Also
-    --------
-    func_a, func_b, func_c
-    func_d : some equivalent func
-    foo.func_e : some other func over
-             multiple lines
-    func_f, func_g, :meth:`func_h`, func_j,
-    func_k
-    :obj:`baz.obj_q`
-    :class:`class_j`: fubar
-        foobar
-    """)
-
-    assert len(doc6['See Also']) == 12
-    for func, desc, role in doc6['See Also']:
-        if func in ('func_a', 'func_b', 'func_c', 'func_f',
-                    'func_g', 'func_h', 'func_j', 'func_k', 'baz.obj_q'):
-            assert(not desc)
-        else:
-            assert(desc)
-
-        if func == 'func_h':
-            assert role == 'meth'
-        elif func == 'baz.obj_q':
-            assert role == 'obj'
-        elif func == 'class_j':
-            assert role == 'class'
-        else:
-            assert role is None
-
-        if func == 'func_d':
-            assert desc == ['some equivalent func']
-        elif func == 'foo.func_e':
-            assert desc == ['some other func over', 'multiple lines']
-        elif func == 'class_j':
-            assert desc == ['fubar', 'foobar']
-
-def test_see_also_print():
-    class Dummy(object):
-        """
-        See Also
-        --------
-        func_a, func_b
-        func_c : some relationship
-                 goes here
-        func_d
-        """
-        pass
-
-    obj = Dummy()
-    s = str(FunctionDoc(obj, role='func'))
-    assert(':func:`func_a`, :func:`func_b`' in s)
-    assert('    some relationship' in s)
-    assert(':func:`func_d`' in s)
-
-doc7 = NumpyDocString("""
-
-        Doc starts on second line.
-
-        """)
-
-def test_empty_first_line():
-    assert doc7['Summary'][0].startswith('Doc starts')
-
-
-def test_no_summary():
-    str(SphinxDocString("""
-    Parameters
-    ----------"""))
-
-
-def test_unicode():
-    doc = SphinxDocString("""
-    öäöäöäöäöåååå
-
-    öäöäöäööäååå
-
-    Parameters
-    ----------
-    ååå : äää
-        ööö
-
-    Returns
-    -------
-    ååå : ööö
-        äää
-
-    """)
-    assert doc['Summary'][0] == u'öäöäöäöäöåååå'.encode('utf-8')

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/tests/test_docscrape.py (from rev 6149, trunk/doc/sphinxext/tests/test_docscrape.py)
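
The tests above double as a usage reference for docscrape: a parsed docstring
behaves like a dictionary of sections. A minimal sketch (assuming doc/sphinxext
is on sys.path; the docstring itself is made up):

    from docscrape import NumpyDocString

    doc = NumpyDocString("""
        f(a, b=0)

        Add two numbers.

        Parameters
        ----------
        a : int
            First operand.
        b : int, optional
            Second operand.
        """)
    print(doc['Signature'])                            # f(a, b=0)
    print([name for name, _, _ in doc['Parameters']])  # ['a', 'b']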

Deleted: branches/dynamic_cpu_configuration/doc/sphinxext/traitsdoc.py
===================================================================
--- trunk/doc/sphinxext/traitsdoc.py	2008-12-16 18:53:25 UTC (rev 6149)
+++ branches/dynamic_cpu_configuration/doc/sphinxext/traitsdoc.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,140 +0,0 @@
-"""
-=========
-traitsdoc
-=========
-
-Sphinx extension that handles docstrings in the Numpy standard format [1]
-and supports Traits [2].
-
-This extension can be used as a replacement for ``numpydoc`` when support
-for Traits is required.
-
-.. [1] http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines#docstring-standard
-.. [2] http://code.enthought.com/projects/traits/
-
-"""
-
-import inspect
-import os
-import pydoc
-
-import docscrape
-import docscrape_sphinx
-from docscrape_sphinx import SphinxClassDoc, SphinxFunctionDoc, SphinxDocString
-
-import numpydoc
-
-import comment_eater
-
-class SphinxTraitsDoc(SphinxClassDoc):
-    def __init__(self, cls, modulename='', func_doc=SphinxFunctionDoc):
-        if not inspect.isclass(cls):
-            raise ValueError("Initialise using a class. Got %r" % cls)
-        self._cls = cls
-
-        if modulename and not modulename.endswith('.'):
-            modulename += '.'
-        self._mod = modulename
-        self._name = cls.__name__
-        self._func_doc = func_doc
-
-        docstring = pydoc.getdoc(cls)
-        docstring = docstring.split('\n')
-
-        # De-indent paragraph
-        try:
-            indent = min(len(s) - len(s.lstrip()) for s in docstring
-                         if s.strip())
-        except ValueError:
-            indent = 0
-
-        for n,line in enumerate(docstring):
-            docstring[n] = docstring[n][indent:]
-
-        self._doc = docscrape.Reader(docstring)
-        self._parsed_data = {
-            'Signature': '',
-            'Summary': '',
-            'Description': [],
-            'Extended Summary': [],
-            'Parameters': [],
-            'Returns': [],
-            'Raises': [],
-            'Warns': [],
-            'Other Parameters': [],
-            'Traits': [],
-            'Methods': [],
-            'See Also': [],
-            'Notes': [],
-            'References': '',
-            'Example': '',
-            'Examples': '',
-            'index': {}
-            }
-
-        self._parse()
-
-    def _str_summary(self):
-        return self['Summary'] + ['']
-
-    def _str_extended_summary(self):
-        return self['Description'] + self['Extended Summary'] + ['']
-
-    def __str__(self, indent=0, func_role="func"):
-        out = []
-        out += self._str_signature()
-        out += self._str_index() + ['']
-        out += self._str_summary()
-        out += self._str_extended_summary()
-        for param_list in ('Parameters', 'Traits', 'Methods',
-                           'Returns','Raises'):
-            out += self._str_param_list(param_list)
-        out += self._str_see_also("obj")
-        out += self._str_section('Notes')
-        out += self._str_references()
-        out += self._str_section('Example')
-        out += self._str_section('Examples')
-        out = self._str_indent(out,indent)
-        return '\n'.join(out)
-
-def looks_like_issubclass(obj, classname):
-    """ Return True if the object has a class or superclass with the given class
-    name.
-
-    Ignores old-style classes.
-    """
-    t = obj
-    if t.__name__ == classname:
-        return True
-    for klass in t.__mro__:
-        if klass.__name__ == classname:
-            return True
-    return False
-
-def get_doc_object(obj, what=None):
-    if what is None:
-        if inspect.isclass(obj):
-            what = 'class'
-        elif inspect.ismodule(obj):
-            what = 'module'
-        elif callable(obj):
-            what = 'function'
-        else:
-            what = 'object'
-    if what == 'class':
-        doc = SphinxTraitsDoc(obj, '', func_doc=SphinxFunctionDoc)
-        if looks_like_issubclass(obj, 'HasTraits'):
-            for name, trait, comment in comment_eater.get_class_traits(obj):
-                # Exclude private traits.
-                if not name.startswith('_'):
-                    doc['Traits'].append((name, trait, comment.splitlines()))
-        return doc
-    elif what in ('function', 'method'):
-        return SphinxFunctionDoc(obj, '')
-    else:
-        return SphinxDocString(pydoc.getdoc(obj))
-
-def setup(app):
-    # init numpydoc
-    numpydoc.setup(app, get_doc_object)
-

Copied: branches/dynamic_cpu_configuration/doc/sphinxext/traitsdoc.py (from rev 6149, trunk/doc/sphinxext/traitsdoc.py)
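
As the module docstring says, traitsdoc is meant as a drop-in replacement for
numpydoc; a conf.py sketch of the switch (assuming the sphinxext directory is
already on sys.path):

    # traitsdoc calls numpydoc.setup() itself, so it is listed instead of
    # numpydoc, not alongside it.
    extensions = ['traitsdoc']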

Modified: branches/dynamic_cpu_configuration/doc/summarize.py
===================================================================
--- branches/dynamic_cpu_configuration/doc/summarize.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/doc/summarize.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -7,10 +7,10 @@
 """
 
 import os, glob, re, sys, inspect, optparse
-sys.path.append(os.path.join(os.path.dirname(__file__), 'ext'))
-from ext.phantom_import import import_phantom_module
+sys.path.append(os.path.join(os.path.dirname(__file__), 'sphinxext'))
+from sphinxext.phantom_import import import_phantom_module
 
-from ext.autosummary_generate import get_documented
+from sphinxext.autosummary_generate import get_documented
 
 CUR_DIR = os.path.dirname(__file__)
 SOURCE_DIR = os.path.join(CUR_DIR, 'source', 'reference')
@@ -56,6 +56,8 @@
 
 def main():
     p = optparse.OptionParser(__doc__)
+    p.add_option("-c", "--columns", action="store", type="int", dest="cols",
+                 default=3, help="Maximum number of columns")
     options, args = p.parse_args()
 
     if len(args) != 0:
@@ -84,13 +86,13 @@
             print "--- %s\n" % filename
         last_filename = filename
         print " ** ", section
-        print format_in_columns(sorted(names))
+        print format_in_columns(sorted(names), options.cols)
         print "\n"
 
     print ""
     print "Undocumented"
     print "============\n"
-    print format_in_columns(sorted(undocumented.keys()))
+    print format_in_columns(sorted(undocumented.keys()), options.cols)
 
 def check_numpy():
     documented = get_documented(glob.glob(SOURCE_DIR + '/*.rst'))
@@ -141,7 +143,7 @@
     
     return undocumented
 
-def format_in_columns(lst):
+def format_in_columns(lst, max_columns):
     """
     Format a list containing strings to a string containing the items
     in columns.
@@ -149,7 +151,9 @@
     lst = map(str, lst)
     col_len = max(map(len, lst)) + 2
     ncols = 80//col_len
-    if ncols == 0:
+    if ncols > max_columns:
+        ncols = max_columns
+    if ncols <= 0:
         ncols = 1
 
     if len(lst) % ncols == 0:

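The new --columns option simply caps the column count that used to be derived
from an 80-character line alone; a standalone sketch of the resulting choice
(names and values are illustrative):

    def ncols_for(names, max_columns, line_width=80):
        col_len = max(len(s) for s in names) + 2
        ncols = min(line_width // col_len, max_columns)
        return max(ncols, 1)

    print(ncols_for(['sin', 'cos', 'arctan2'], 3))   # -> 3
    print(ncols_for(['sin', 'cos', 'arctan2'], 2))   # -> 2
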
Modified: branches/dynamic_cpu_configuration/numpy/core/SConscript
===================================================================
--- branches/dynamic_cpu_configuration/numpy/core/SConscript	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/core/SConscript	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,4 +1,4 @@
-# Last Change: Fri Oct 03 04:00 PM 2008 J
+# Last Change: Sat Nov 29 01:00 AM 2008 J
 # vim:syntax=python
 import os
 import sys
@@ -137,9 +137,9 @@
 mfuncs_defined = dict([(f, 0) for f in mfuncs])
 
 # Check for mandatory funcs: we barf if a single one of those is not there
-    mandatory_funcs = ["sin", "cos", "tan", "sinh", "cosh", "tanh", "fabs",
-		"floor", "ceil", "sqrt", "log10", "log", "exp", "asin",
-		"acos", "atan", "fmod", 'modf', 'frexp', 'ldexp']
+mandatory_funcs = ["sin", "cos", "tan", "sinh", "cosh", "tanh", "fabs",
+		           "floor", "ceil", "sqrt", "log10", "log", "exp", "asin",
+		           "acos", "atan", "fmod", 'modf', 'frexp', 'ldexp']
 
 if not config.CheckFuncsAtOnce(mandatory_funcs):
     raise SystemError("One of the required function to build numpy is not"
@@ -159,17 +159,17 @@
 
 # XXX: we do not test for hypot because python checks for it (HAVE_HYPOT in
 # python.h... I wish they would clean their public headers someday)
-    optional_stdfuncs = ["expm1", "log1p", "acosh", "asinh", "atanh",
-                         "rint", "trunc", "exp2", "log2"]
+optional_stdfuncs = ["expm1", "log1p", "acosh", "asinh", "atanh",
+                     "rint", "trunc", "exp2", "log2"]
 
 check_funcs(optional_stdfuncs)
 
 # C99 functions: float and long double versions
-    c99_funcs = ["sin", "cos", "tan", "sinh", "cosh", "tanh", "fabs", "floor",
-                 "ceil", "rint", "trunc", "sqrt", "log10", "log", "log1p", "exp",
-                 "expm1", "asin", "acos", "atan", "asinh", "acosh", "atanh",
-                 "hypot", "atan2", "pow", "fmod", "modf", 'frexp', 'ldexp',
-                 "exp2", "log2"]
+c99_funcs = ["sin", "cos", "tan", "sinh", "cosh", "tanh", "fabs", "floor",
+             "ceil", "rint", "trunc", "sqrt", "log10", "log", "log1p", "exp",
+             "expm1", "asin", "acos", "atan", "asinh", "acosh", "atanh",
+             "hypot", "atan2", "pow", "fmod", "modf", 'frexp', 'ldexp',
+             "exp2", "log2"]
 
 for prec in ['l', 'f']:
     fns = [f + prec for f in c99_funcs]

Deleted: branches/dynamic_cpu_configuration/numpy/core/code_generators/docstrings.py
===================================================================
--- branches/dynamic_cpu_configuration/numpy/core/code_generators/docstrings.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/core/code_generators/docstrings.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,2540 +0,0 @@
-# Docstrings for generated ufuncs
-
-docdict = {}
-
-def get(name):
-    return docdict.get(name)
-
-def add_newdoc(place, name, doc):
-    docdict['.'.join((place, name))] = doc
-
-
-add_newdoc('numpy.core.umath', 'absolute',
-    """
-    Calculate the absolute value element-wise.
-
-    Parameters
-    ----------
-    x : array_like
-        Input array.
-
-    Returns
-    -------
-    res : ndarray
-        An ndarray containing the absolute value of
-        each element in `x`.  For complex input, ``a + ib``, the
-        absolute value is :math:`\\sqrt{ a^2 + b^2 }`.
-
-    Examples
-    --------
-    >>> x = np.array([-1.2, 1.2])
-    >>> np.absolute(x)
-    array([ 1.2,  1.2])
-    >>> np.absolute(1.2 + 1j)
-    1.5620499351813308
-
-    Plot the function over ``[-10, 10]``:
-
-    >>> import matplotlib.pyplot as plt
-
-    >>> x = np.linspace(-10, 10, 101)
-    >>> plt.plot(x, np.absolute(x))
-    >>> plt.show()
-
-    Plot the function over the complex plane:
-
-    >>> xx = x + 1j * x[:, np.newaxis]
-    >>> plt.imshow(np.abs(xx), extent=[-10, 10, -10, 10])
-    >>> plt.show()
-
-    """)
-
-add_newdoc('numpy.core.umath', 'add',
-    """
-    Add arguments element-wise.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        The arrays to be added.
-
-    Returns
-    -------
-    y : {ndarray, scalar}
-        The sum of `x1` and `x2`, element-wise.  Returns scalar if
-        both  `x1` and `x2` are scalars.
-
-    Notes
-    -----
-    Equivalent to `x1` + `x2` in terms of array broadcasting.
-
-    Examples
-    --------
-    >>> np.add(1.0, 4.0)
-    5.0
-    >>> x1 = np.arange(9.0).reshape((3, 3))
-    >>> x2 = np.arange(3.0)
-    >>> np.add(x1, x2)
-    array([[  0.,   2.,   4.],
-           [  3.,   5.,   7.],
-           [  6.,   8.,  10.]])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'arccos',
-    """
-    Trigonometric inverse cosine, element-wise.
-
-    The inverse of `cos` so that, if ``y = cos(x)``, then ``x = arccos(y)``.
-
-    Parameters
-    ----------
-    x : array_like
-        `x`-coordinate on the unit circle.
-        For real arguments, the domain is [-1, 1].
-
-    Returns
-    -------
-    angle : ndarray
-        The angle of the ray intersecting the unit circle at the given
-        `x`-coordinate in radians [0, pi]. If `x` is a scalar then a
-        scalar is returned, otherwise an array of the same shape as `x`
-        is returned.
-
-    See Also
-    --------
-    cos, arctan, arcsin
-
-    Notes
-    -----
-    `arccos` is a multivalued function: for each `x` there are infinitely
-    many numbers `z` such that `cos(z) = x`. The convention is to return the
-    angle `z` whose real part lies in `[0, pi]`.
-
-    For real-valued input data types, `arccos` always returns real output.
-    For each value that cannot be expressed as a real number or infinity, it
-    yields ``nan`` and sets the `invalid` floating point error flag.
-
-    For complex-valued input, `arccos` is a complex analytical function that
-    has branch cuts `[-inf, -1]` and `[1, inf]` and is continuous from above
-    on the former and from below on the latter.
-
-    The inverse `cos` is also known as `acos` or cos^-1.
-
-    References
-    ----------
-    .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions",
-           10th printing, 1964, pp. 79. http://www.math.sfu.ca/~cbm/aands/
-    .. [2] Wikipedia, "Inverse trigonometric function",
-           http://en.wikipedia.org/wiki/Arccos
-
-    Examples
-    --------
-    We expect the arccos of 1 to be 0, and of -1 to be pi:
-
-    >>> np.arccos([1, -1])
-    array([ 0.        ,  3.14159265])
-
-    Plot arccos:
-
-    >>> import matplotlib.pyplot as plt
-    >>> x = np.linspace(-1, 1, num=100)
-    >>> plt.plot(x, np.arccos(x))
-    >>> plt.axis('tight')
-    >>> plt.show()
-
-    """)
-
-add_newdoc('numpy.core.umath', 'arccosh',
-    """
-    Inverse hyperbolic cosine, elementwise.
-
-    Parameters
-    ----------
-    x : array_like
-        Input array.
-
-    Returns
-    -------
-    out : ndarray
-        Array of the same shape and dtype as `x`.
-
-    Notes
-    -----
-    `arccosh` is a multivalued function: for each `x` there are infinitely
-    many numbers `z` such that `cosh(z) = x`. The convention is to return the
-    `z` whose imaginary part lies in `[-pi, pi]` and the real part in
-    ``[0, inf]``.
-
-    For real-valued input data types, `arccosh` always returns real output.
-    For each value that cannot be expressed as a real number or infinity, it
-    yields ``nan`` and sets the `invalid` floating point error flag.
-
-    For complex-valued input, `arccosh` is a complex analytical function that
-    has a branch cut `[-inf, 1]` and is continuous from above on it.
-
-    References
-    ----------
-    .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions",
-           10th printing, 1964, pp. 86. http://www.math.sfu.ca/~cbm/aands/
-    .. [2] Wikipedia, "Inverse hyperbolic function",
-           http://en.wikipedia.org/wiki/Arccosh
-
-    Examples
-    --------
-    >>> np.arccosh([np.e, 10.0])
-    array([ 1.65745445,  2.99322285])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'arcsin',
-    """
-    Inverse sine elementwise.
-
-    Parameters
-    ----------
-    x : array_like
-      `y`-coordinate on the unit circle.
-
-    Returns
-    -------
-    angle : ndarray
-      The angle of the ray intersecting the unit circle at the given
-      `y`-coordinate in radians ``[-pi/2, pi/2]``. If `x` is a scalar then
-      a scalar is returned, otherwise an array is returned.
-
-    See Also
-    --------
-    sin, arctan, arctan2
-
-    Notes
-    -----
-    `arcsin` is a multivalued function: for each `x` there are infinitely
-    many numbers `z` such that `sin(z) = x`. The convention is to return the
-    angle `z` whose real part lies in `[-pi/2, pi/2]`.
-
-    For real-valued input data types, `arcsin` always returns real output.
-    For each value that cannot be expressed as a real number or infinity, it
-    yields ``nan`` and sets the `invalid` floating point error flag.
-
-    For complex-valued input, `arcsin` is a complex analytical function that
-    has branch cuts `[-inf, -1]` and `[1, inf]` and is continuous from above
-    on the former and from below on the latter.
-
-    The inverse sine is also known as `asin` or ``sin^-1``.
-
-    References
-    ----------
-    .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions",
-           10th printing, 1964, pp. 79. http://www.math.sfu.ca/~cbm/aands/
-    .. [2] Wikipedia, "Inverse trigonometric function",
-           http://en.wikipedia.org/wiki/Arcsin
-
-    Examples
-    --------
-    >>> np.arcsin(1)     # pi/2
-    1.5707963267948966
-    >>> np.arcsin(-1)    # -pi/2
-    -1.5707963267948966
-    >>> np.arcsin(0)
-    0.0
-
-    """)
-
-add_newdoc('numpy.core.umath', 'arcsinh',
-    """
-    Inverse hyperbolic sine elementwise.
-
-    Parameters
-    ----------
-    x : array_like
-        Input array.
-
-    Returns
-    -------
-    out : ndarray
-        Array of the same shape as `x`.
-
-    Notes
-    -----
-    `arcsinh` is a multivalued function: for each `x` there are infinitely
-    many numbers `z` such that `sinh(z) = x`. The convention is to return the
-    `z` whose imaginary part lies in `[-pi/2, pi/2]`.
-
-    For real-valued input data types, `arcsinh` always returns real output.
-    For each value that cannot be expressed as a real number or infinity, it
-    returns ``nan`` and sets the `invalid` floating point error flag.
-
-    For complex-valued input, `arcsinh` is a complex analytical function that
-    has branch cuts `[1j, infj]` and `[-1j, -infj]` and is continuous from
-    the right on the former and from the left on the latter.
-
-    The inverse hyperbolic sine is also known as `asinh` or ``sinh^-1``.
-
-    References
-    ----------
-    .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions",
-           10th printing, 1964, pp. 86. http://www.math.sfu.ca/~cbm/aands/
-    .. [2] Wikipedia, "Inverse hyperbolic function",
-           http://en.wikipedia.org/wiki/Arcsinh
-
-    Examples
-    --------
-    >>> np.arcsinh(np.array([np.e, 10.0]))
-    array([ 1.72538256,  2.99822295])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'arctan',
-    """
-    Trigonometric inverse tangent, element-wise.
-
-    The inverse of tan, so that if ``y = tan(x)`` then
-    ``x = arctan(y)``.
-
-    Parameters
-    ----------
-    x : array_like
-        Input values.  `arctan` is applied to each element of `x`.
-
-    Returns
-    -------
-    out : ndarray
-        Out has the same shape as `x`.  Its real part is
-        in ``[-pi/2, pi/2]``. It is a scalar if `x` is a scalar.
-
-    See Also
-    --------
-    arctan2 : Calculate the arctan of y/x.
-
-    Notes
-    -----
-    `arctan` is a multivalued function: for each `x` there are infinitely
-    many numbers `z` such that `tan(z) = x`. The convention is to return the
-    angle `z` whose real part lies in `[-pi/2, pi/2]`.
-
-    For real-valued input data types, `arctan` always returns real output.
-    For each value that cannot be expressed as a real number or infinity, it
-    yields ``nan`` and sets the `invalid` floating point error flag.
-
-    For complex-valued input, `arctan` is a complex analytical function that
-    has branch cuts `[1j, infj]` and `[-1j, -infj]` and is continuous from the
-    left on the former and from the right on the latter.
-
-    The inverse tangent is also known as `atan` or ``tan^-1``.
-
-    References
-    ----------
-    .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions",
-           10th printing, 1964, pp. 79. http://www.math.sfu.ca/~cbm/aands/
-    .. [2] Wikipedia, "Inverse trigonometric function",
-           http://en.wikipedia.org/wiki/Arctan
-
-    Examples
-    --------
-    We expect the arctan of 0 to be 0, and of 1 to be :math:`\\pi/4`:
-
-    >>> np.arctan([0, 1])
-    array([ 0.        ,  0.78539816])
-
-    >>> np.pi/4
-    0.78539816339744828
-
-    Plot arctan:
-
-    >>> import matplotlib.pyplot as plt
-    >>> x = np.linspace(-10, 10)
-    >>> plt.plot(x, np.arctan(x))
-    >>> plt.axis('tight')
-    >>> plt.show()
-
-    """)
-
-add_newdoc('numpy.core.umath', 'arctan2',
-    """
-    Elementwise arc tangent of ``x1/x2`` choosing the quadrant correctly.
-
-    The quadrant (ie. branch) is chosen so that ``arctan2(x1, x2)``
-    is the signed angle in radians between the line segments
-    ``(0,0) - (1,0)`` and ``(0,0) - (x2,x1)``. This function is defined
-    also for `x2` = 0.
-
-    `arctan2` is not defined for complex-valued arguments.
-
-    Parameters
-    ----------
-    x1 : array_like, real-valued
-        y-coordinates.
-    x2 : array_like, real-valued
-        x-coordinates. `x2` must be broadcastable to match the shape of `x1`,
-        or vice versa.
-
-    Returns
-    -------
-    angle : ndarray
-        Array of angles in radians, in the range ``[-pi, pi]``.
-
-    See Also
-    --------
-    arctan, tan
-
-    Notes
-    -----
-    `arctan2` is identical to the `atan2` function of the underlying
-    C library. The following special values are defined in the C standard [2]_:
-
-    ====== ====== ================
-    `x1`   `x2`   `arctan2(x1,x2)`
-    ====== ====== ================
-    +/- 0  +0     +/- 0
-    +/- 0  -0     +/- pi
-     > 0   +/-inf +0 / +pi
-     < 0   +/-inf -0 / -pi
-    +/-inf +inf   +/- (pi/4)
-    +/-inf -inf   +/- (3*pi/4)
-    ====== ====== ================
-
-    Note that +0 and -0 are distinct floating point numbers.
-
-    References
-    ----------
-    .. [1] Wikipedia, "atan2",
-           http://en.wikipedia.org/wiki/Atan2
-    .. [2] ISO/IEC standard 9899:1999, "Programming language C", 1999.
-
-    Examples
-    --------
-    Consider four points in different quadrants:
-
-    >>> x = np.array([-1, +1, +1, -1])
-    >>> y = np.array([-1, -1, +1, +1])
-    >>> np.arctan2(y, x) * 180 / np.pi
-    array([-135.,  -45.,   45.,  135.])
-
-    Note the order of the parameters. `arctan2` is defined also when `x2` = 0
-    and at several other special points, obtaining values in
-    the range ``[-pi, pi]``:
-
-    >>> np.arctan2([1., -1.], [0., 0.])
-    array([ 1.57079633, -1.57079633])
-    >>> np.arctan2([0., 0., np.inf], [+0., -0., np.inf])
-    array([ 0.        ,  3.14159265,  0.78539816])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'arctanh',
-    """
-    Inverse hyperbolic tangent elementwise.
-
-    Parameters
-    ----------
-    x : array_like
-        Input array.
-
-    Returns
-    -------
-    out : ndarray
-        Array of the same shape as `x`.
-
-    Notes
-    -----
-    `arctanh` is a multivalued function: for each `x` there are infinitely
-    many numbers `z` such that `tanh(z) = x`. The convention is to return the
-    `z` whose imaginary part lies in `[-pi/2, pi/2]`.
-
-    For real-valued input data types, `arctanh` always returns real output.
-    For each value that cannot be expressed as a real number or infinity, it
-    yields ``nan`` and sets the `invalid` floating point error flag.
-
-    For complex-valued input, `arctanh` is a complex analytical function that
-    has branch cuts `[-1, -inf]` and `[1, inf]` and is continuous from
-    above on the former and from below on the latter.
-
-    The inverse hyperbolic tangent is also known as `atanh` or ``tanh^-1``.
-
-    References
-    ----------
-    .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions",
-           10th printing, 1964, pp. 86. http://www.math.sfu.ca/~cbm/aands/
-    .. [2] Wikipedia, "Inverse hyperbolic function",
-           http://en.wikipedia.org/wiki/Arctanh
-
-    Examples
-    --------
-    >>> np.arctanh([0, -0.5])
-    array([ 0.        , -0.54930614])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'bitwise_and',
-    """
-    Compute bit-wise AND of two arrays, element-wise.
-
-    When calculating the bit-wise AND between two elements, ``x`` and ``y``,
-    each element is first converted to its binary representation (which works
-    just like the decimal system, only now we're using 2 instead of 10):
-
-    .. math:: x = \\sum_{i=0}^{W-1} a_i \\cdot 2^i\\\\
-              y = \\sum_{i=0}^{W-1} b_i \\cdot 2^i,
-
-    where ``W`` is the bit-width of the type (e.g., 8 for a byte or uint8),
-    and each :math:`a_i` and :math:`b_i` is either 0 or 1.  For example, 13
-    is represented as ``00001101``, which translates to
-    :math:`2^3 + 2^2 + 2^0`.
-
-    The bit-wise operator is the result of
-
-    .. math:: z = \\sum_{i=0}^{i=W-1} (a_i \\wedge b_i) \\cdot 2^i,
-
-    where :math:`\\wedge` is the AND operator, which yields one whenever
-    both :math:`a_i` and :math:`b_i` are 1.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Only integer types are handled (including booleans).
-
-    Returns
-    -------
-    out : array_like
-        Result.
-
-    See Also
-    --------
-    bitwise_or, bitwise_xor
-    logical_and
-    binary_repr :
-        Return the binary representation of the input number as a string.
-
-    Examples
-    --------
-    We've seen that 13 is represented by ``00001101``.  Similarly, 17 is
-    represented by ``00010001``.  The bit-wise AND of 13 and 17 is
-    therefore ``00000001``, or 1:
-
-    >>> np.bitwise_and(13, 17)
-    1
-
-    >>> np.bitwise_and(14, 13)
-    12
-    >>> np.binary_repr(12)
-    '1100'
-    >>> np.bitwise_and([14,3], 13)
-    array([12,  1])
-
-    >>> np.bitwise_and([11,7], [4,25])
-    array([0, 1])
-    >>> np.bitwise_and(np.array([2,5,255]), np.array([3,14,16]))
-    array([ 2,  4, 16])
-    >>> np.bitwise_and([True, True], [False, True])
-    array([False,  True], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'bitwise_or',
-    """
-    Compute bit-wise OR of two arrays, element-wise.
-
-    When calculating the bit-wise OR between two elements, ``x`` and ``y``,
-    each element is first converted to its binary representation (which works
-    just like the decimal system, only now we're using 2 instead of 10):
-
-    .. math:: x = \\sum_{i=0}^{W-1} a_i \\cdot 2^i\\\\
-              y = \\sum_{i=0}^{W-1} b_i \\cdot 2^i,
-
-    where ``W`` is the bit-width of the type (e.g., 8 for a byte or uint8),
-    and each :math:`a_i` and :math:`b_i` is either 0 or 1.  For example, 13
-    is represented as ``00001101``, which translates to
-    :math:`2^3 + 2^2 + 2^0`.
-
-    The bit-wise operator is the result of
-
-    .. math:: z = \\sum_{i=0}^{i=W-1} (a_i \\vee b_i) \\cdot 2^i,
-
-    where :math:`\\vee` is the OR operator, which yields one whenever
-    either :math:`a_i` or :math:`b_i` is 1.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Only integer types are handled (including booleans).
-
-    Returns
-    -------
-    out : array_like
-        Result.
-
-    See Also
-    --------
-    bitwise_and, bitwise_xor
-    logical_or
-    binary_repr :
-        Return the binary representation of the input number as a string.
-
-    Examples
-    --------
-    We've seen that 13 is represented by ``00001101``.  Similarly, 16 is
-    represented by ``00010000``.  The bit-wise OR of 13 and 16 is
-    therefore ``00011101``, or 29:
-
-    >>> np.bitwise_or(13, 16)
-    29
-    >>> np.binary_repr(29)
-    '11101'
-
-    >>> np.bitwise_or(32, 2)
-    34
-    >>> np.bitwise_or([33, 4], 1)
-    array([33,  5])
-    >>> np.bitwise_or([33, 4], [1, 2])
-    array([33,  6])
-
-    >>> np.bitwise_or(np.array([2, 5, 255]), np.array([4, 4, 4]))
-    array([  6,   5, 255])
-    >>> np.bitwise_or(np.array([2, 5, 255, 2147483647L], dtype=np.int32),
-    ...               np.array([4, 4, 4, 2147483647L], dtype=np.int32))
-    array([         6,          5,        255, 2147483647])
-    >>> np.bitwise_or([True, True], [False, True])
-    array([ True,  True], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'bitwise_xor',
-    """
-    Compute bit-wise XOR of two arrays, element-wise.
-
-    When calculating the bit-wise XOR between two elements, ``x`` and ``y``,
-    each element is first converted to its binary representation (which works
-    just like the decimal system, only now we're using 2 instead of 10):
-
-    .. math:: x = \\sum_{i=0}^{W-1} a_i \\cdot 2^i\\\\
-              y = \\sum_{i=0}^{W-1} b_i \\cdot 2^i,
-
-    where ``W`` is the bit-width of the type (e.g., 8 for a byte or uint8),
-    and each :math:`a_i` and :math:`b_i` is either 0 or 1.  For example, 13
-    is represented as ``00001101``, which translates to
-    :math:`2^3 + 2^2 + 2^0`.
-
-    The bit-wise operator is the result of
-
-    .. math:: z = \\sum_{i=0}^{i=W-1} (a_i \\oplus b_i) \\cdot 2^i,
-
-    where :math:`\\oplus` is the XOR operator, which yields one whenever
-    either :math:`a_i` or :math:`b_i` is 1, but not both.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Only integer types are handled (including booleans).
-
-    Returns
-    -------
-    out : ndarray
-        Result.
-
-    See Also
-    --------
-    bitwise_and, bitwise_or
-    logical_xor
-    binary_repr :
-        Return the binary representation of the input number as a string.
-
-    Examples
-    --------
-    We've seen that 13 is represented by ``00001101``.  Similarly, 17 is
-    represented by ``00010001``.  The bit-wise XOR of 13 and 17 is
-    therefore ``00011100``, or 28:
-
-    >>> np.bitwise_xor(13, 17)
-    28
-    >>> np.binary_repr(28)
-    '11100'
-
-    >>> np.bitwise_xor(31, 5)
-    26
-    >>> np.bitwise_xor([31,3], 5)
-    array([26,  6])
-
-    >>> np.bitwise_xor([31,3], [5,6])
-    array([26,  5])
-    >>> np.bitwise_xor([True, True], [False, True])
-    array([ True, False], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'ceil',
-    """
-    Return the ceiling of the input, element-wise.
-
-    The ceil of the scalar `x` is the smallest integer `i`, such that
-    `i >= x`.  It is often denoted as :math:`\\lceil x \\rceil`.
-
-    Parameters
-    ----------
-    x : array_like
-        Input data.
-
-    Returns
-    -------
-    y : {ndarray, scalar}
-        The ceiling of each element in `x`.
-
-    Examples
-    --------
-    >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
-    >>> np.ceil(a)
-    array([-1., -1., -0.,  1.,  2.,  2.,  2.])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'trunc',
-    """
-    Return the truncated value of the input, element-wise.
-
-    The truncated value of the scalar `x` is the nearest integer `i` which
-    is closer to zero than `x` is; in other words, the fractional part of
-    `x` is discarded.
-
-    Parameters
-    ----------
-    x : array_like
-        Input data.
-
-    Returns
-    -------
-    y : {ndarray, scalar}
-        The truncated value of each element in `x`.
-
-    Examples
-    --------
-    >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
-    >>> np.trunc(a)
-    array([-1., -1., -0.,  0.,  1.,  1.,  2.])
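-
-    For comparison, `floor` rounds toward minus infinity instead of toward
-    zero (illustrative; see the `floor` docstring):
-
-    >>> np.floor(a)
-    array([-2., -2., -1.,  0.,  1.,  1.,  2.])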
-
-    """)
-
-add_newdoc('numpy.core.umath', 'conjugate',
-    """
-    Return the complex conjugate, element-wise.
-
-    The complex conjugate of a complex number is obtained by changing the
-    sign of its imaginary part.
-
-    Parameters
-    ----------
-    x : array_like
-        Input value.
-
-    Returns
-    -------
-    y : ndarray
-        The complex conjugate of `x`, with the same dtype as `x`.
-
-    Examples
-    --------
-    >>> np.conjugate(1+2j)
-    (1-2j)
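-
-    The conjugate is applied elementwise to arrays as well (illustrative;
-    exact repr formatting may differ between versions):
-
-    >>> np.conjugate([1+2j, 1-2j])
-    array([ 1.-2.j,  1.+2.j])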
-
-    """)
-
-add_newdoc('numpy.core.umath', 'cos',
-    """
-    Cosine elementwise.
-
-    Parameters
-    ----------
-    x : array_like
-        Input array in radians.
-
-    Returns
-    -------
-    out : ndarray
-        Output array of same shape as `x`.
-
-    Examples
-    --------
-    >>> np.cos(np.array([0, np.pi/2, np.pi]))
-    array([  1.00000000e+00,   6.12303177e-17,  -1.00000000e+00])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'cosh',
-    """
-    Hyperbolic cosine, element-wise.
-
-    Equivalent to ``1/2 * (np.exp(x) + np.exp(-x))`` and ``np.cos(1j*x)``.
-
-    Parameters
-    ----------
-    x : array_like
-        Input array.
-
-    Returns
-    -------
-    out : ndarray
-        Output array of same shape as `x`.
-
-    Examples
-    --------
-    >>> np.cosh(0)
-    1.0
-
-    The hyperbolic cosine describes the shape of a hanging cable:
-
-    >>> import matplotlib.pyplot as plt
-    >>> x = np.linspace(-4, 4, 1000)
-    >>> plt.plot(x, np.cosh(x))
-    >>> plt.show()
-
-    """)
-
-add_newdoc('numpy.core.umath', 'degrees',
-    """
-    Convert angles from radians to degrees.
-
-    Parameters
-    ----------
-    x : array_like
-      Angle in radians.
-
-    Returns
-    -------
-    y : ndarray
-      The corresponding angle in degrees.
-
-
-    See Also
-    --------
-    radians : Convert angles from degrees to radians.
-    unwrap : Remove large jumps in angle by wrapping.
-
-    Notes
-    -----
-    ``degrees(x)`` is ``180 * x / pi``.
-
-    Examples
-    --------
-    >>> np.degrees(np.pi/2)
-    90.0
-
-    """)
-
-add_newdoc('numpy.core.umath', 'divide',
-    """
-    Divide arguments element-wise.
-
-    Parameters
-    ----------
-    x1 : array_like
-        Dividend array.
-    x2 : array_like
-        Divisor array.
-
-    Returns
-    -------
-    y : {ndarray, scalar}
-        The quotient `x1/x2`, element-wise. Returns a scalar if
-        both  `x1` and `x2` are scalars.
-
-    See Also
-    --------
-    seterr : Set whether to raise or warn on overflow, underflow and division
-             by zero.
-
-    Notes
-    -----
-    Equivalent to `x1` / `x2` in terms of array-broadcasting.
-
-    Behavior on division by zero can be changed using `seterr`.
-
-    When both `x1` and `x2` are of an integer type, `divide` will return
-    integers and throw away the fractional part. Moreover, division by zero
-    always yields zero in integer arithmetic.
-
-    Examples
-    --------
-    >>> np.divide(2.0, 4.0)
-    0.5
-    >>> x1 = np.arange(9.0).reshape((3, 3))
-    >>> x2 = np.arange(3.0)
-    >>> np.divide(x1, x2)
-    array([[ NaN,  1. ,  1. ],
-           [ Inf,  4. ,  2.5],
-           [ Inf,  7. ,  4. ]])
-
-    Note the behavior with integer types:
-
-    >>> np.divide(2, 4)
-    0
-    >>> np.divide(2, 4.)
-    0.5
-
-    Division by zero always yields zero in integer arithmetic, and does not
-    raise an exception or a warning:
-
-    >>> np.divide(np.array([0, 1], dtype=int), np.array([0, 0], dtype=int))
-    array([0, 0])
-
-    Division by zero can, however, be caught using `seterr`:
-
-    >>> old_err_state = np.seterr(divide='raise')
-    >>> np.divide(1, 0)
-    Traceback (most recent call last):
-      File "<stdin>", line 1, in <module>
-    FloatingPointError: divide by zero encountered in divide
-
-    >>> ignored_states = np.seterr(**old_err_state)
-    >>> np.divide(1, 0)
-    0
-
-    """)
-
-add_newdoc('numpy.core.umath', 'equal',
-    """
-    Returns elementwise x1 == x2 in a bool array.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Input arrays of the same shape.
-
-    Returns
-    -------
-    out : {ndarray, bool}
-        Output array of bools giving the elementwise test `x1` == `x2`, or a
-        single bool if `x1` and `x2` are scalars.
-
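-    Examples
-    --------
-    A minimal illustration (broadcasting applies as for other ufuncs; the
-    exact repr formatting may differ between versions):
-
-    >>> np.equal([0, 1, 3], np.arange(3))
-    array([ True,  True, False], dtype=bool)
-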
-    """)
-
-add_newdoc('numpy.core.umath', 'exp',
-    """
-    Calculate the exponential of the elements in the input array.
-
-    Parameters
-    ----------
-    x : array_like
-        Input values.
-
-    Returns
-    -------
-    out : ndarray
-        Element-wise exponential of `x`.
-
-    Notes
-    -----
-    The irrational number ``e`` is also known as Euler's number.  It is
-    approximately 2.718281, and is the base of the natural logarithm,
-    ``ln`` (this means that, if :math:`x = \\ln y = \\log_e y`,
-    then :math:`e^x = y`). For real input, ``exp(x)`` is always positive.
-
-    For complex arguments, ``x = a + ib``, we can write
-    :math:`e^x = e^a e^{ib}`.  The first term, :math:`e^a`, is already
-    known (it is the real argument, described above).  The second term,
-    :math:`e^{ib}`, is :math:`\\cos b + i \\sin b`, a function with magnitude
-    1 and a periodic phase.
-
-    References
-    ----------
-    .. [1] Wikipedia, "Exponential function",
-           http://en.wikipedia.org/wiki/Exponential_function
-    .. [2] M. Abramovitz and I. A. Stegun, "Handbook of Mathematical Functions
-           with Formulas, Graphs, and Mathematical Tables," Dover, 1964, p. 69,
-           http://www.math.sfu.ca/~cbm/aands/page_69.htm
-
-    Examples
-    --------
-    Plot the magnitude and phase of ``exp(x)`` in the complex plane:
-
-    >>> import matplotlib.pyplot as plt
-
-    >>> x = np.linspace(-2*np.pi, 2*np.pi, 100)
-    >>> xx = x + 1j * x[:, np.newaxis] # a + ib over complex plane
-    >>> out = np.exp(xx)
-
-    >>> plt.subplot(121)
-    >>> plt.imshow(np.abs(out),
-    ...            extent=[-2*np.pi, 2*np.pi, -2*np.pi, 2*np.pi])
-    >>> plt.title('Magnitude of exp(x)')
-
-    >>> plt.subplot(122)
-    >>> plt.imshow(np.angle(out),
-    ...            extent=[-2*np.pi, 2*np.pi, -2*np.pi, 2*np.pi])
-    >>> plt.title('Phase (angle) of exp(x)')
-    >>> plt.show()
-
-    """)
-
-add_newdoc('numpy.core.umath', 'expm1',
-    """
-    Return the exponential of the elements in the array minus one.
-
-    Parameters
-    ----------
-    x : array_like
-        Input values.
-
-    Returns
-    -------
-    out : ndarray
-        Element-wise exponential minus one: ``out=exp(x)-1``.
-
-    See Also
-    --------
-    log1p : ``log(1+x)``, the inverse of expm1.
-
-
-    Notes
-    -----
-    This function provides greater precision than using ``exp(x)-1``
-    for small values of `x`.
-
-    Examples
-    --------
-    Since the series expansion of ``e**x = 1 + x + x**2/2! + x**3/3! + ...``,
-    for very small `x` we expect that ``e**x -1 ~ x + x**2/2``:
-
-    >>> np.expm1(1e-10)
-    1.00000000005e-10
-    >>> np.exp(1e-10) - 1
-    1.000000082740371e-10
-
-    """)
-
-add_newdoc('numpy.core.umath', 'fabs',
-    """
-    Compute the absolute values elementwise.
-
-    This function returns the absolute values (positive magnitude) of the data
-    in `x`. Complex values are not handled, use `absolute` to find the
-    absolute values of complex data.
-
-    Parameters
-    ----------
-    x : array_like
-        The array of numbers for which the absolute values are required. If
-        `x` is a scalar, the result `y` will also be a scalar.
-
-    Returns
-    -------
-    y : {ndarray, scalar}
-        The absolute values of `x`; the returned values are always floats.
-
-    See Also
-    --------
-    absolute : Absolute values including `complex` types.
-
-    Examples
-    --------
-    >>> np.fabs(-1)
-    1.0
-    >>> np.fabs([-1.2, 1.2])
-    array([ 1.2,  1.2])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'floor',
-    """
-    Return the floor of the input, element-wise.
-
-    The floor of the scalar `x` is the largest integer `i`, such that
-    `i <= x`.  It is often denoted as :math:`\\lfloor x \\rfloor`.
-
-    Parameters
-    ----------
-    x : array_like
-        Input data.
-
-    Returns
-    -------
-    y : {ndarray, scalar}
-        The floor of each element in `x`.
-
-    Notes
-    -----
-    Some spreadsheet programs calculate the "floor-towards-zero", in other
-    words ``floor(-2.5) == -2``.  NumPy, however, uses the definition of
-    `floor` such that ``floor(-2.5) == -3``.
-
-    Examples
-    --------
-    >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
-    >>> np.floor(a)
-    array([-2., -2., -1.,  0.,  1.,  1.,  2.])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'floor_divide',
-    """
-    Return the largest integer smaller or equal to the division of the inputs.
-
-    Parameters
-    ----------
-    x1 : array_like
-        Numerator.
-    x2 : array_like
-        Denominator.
-
-    Returns
-    -------
-    y : ndarray
-        y = floor(`x1`/`x2`)
-
-
-    See Also
-    --------
-    divide : Standard division.
-    floor : Round a number to the nearest integer toward minus infinity.
-    ceil : Round a number to the nearest integer toward infinity.
-
-    Examples
-    --------
-    >>> np.floor_divide(7,3)
-    2
-    >>> np.floor_divide([1., 2., 3., 4.], 2.5)
-    array([ 0.,  0.,  1.,  1.])
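-
-    Because rounding is toward minus infinity, negative quotients differ
-    from C-style truncation (illustrative):
-
-    >>> np.floor_divide(-7, 3)
-    -3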
-
-    """)
-
-add_newdoc('numpy.core.umath', 'fmod',
-    """
-    Return the remainder of division.
-
-    This is the NumPy implementation of the C library function ``fmod``,
-    which matches the C ``%`` operator for integer arguments.
-
-    Parameters
-    ----------
-    x1 : array_like
-      Dividend.
-    x2 : array_like
-      Divisor.
-
-    Returns
-    -------
-    y : array_like
-      The remainder of the division of `x1` by `x2`.
-
-    See Also
-    --------
-    mod : Modulo operation where the quotient is ``floor(x1/x2)``.
-
-    Notes
-    -----
-    The result of the modulo operation for negative dividend and divisors is
-    bound by conventions. In `fmod`, the sign of the remainder is the sign of
-    the dividend, and the sign of the divisor has no influence on the results.
-
-    Examples
-    --------
-    >>> np.fmod([-3, -2, -1, 1, 2, 3], 2)
-    array([-1,  0, -1,  1,  0,  1])
-
-    >>> np.mod([-3, -2, -1, 1, 2, 3], 2)
-    array([1, 0, 1, 1, 0, 1])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'greater',
-    """
-    Return (x1 > x2) element-wise.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Input arrays.
-
-    Returns
-    -------
-    out : {ndarray, bool}
-        Output array of bools, or a single bool if `x1` and `x2` are scalars.
-
-    See Also
-    --------
-    greater_equal
-
-    Examples
-    --------
-    >>> np.greater([4,2],[2,2])
-    array([ True, False], dtype=bool)
-
-    If the inputs are ndarrays, then np.greater is equivalent to '>'.
-
-    >>> a = np.array([4,2])
-    >>> b = np.array([2,2])
-    >>> a > b
-    array([ True, False], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'greater_equal',
-    """
-    Return (x1 >= x2) element-wise.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Input arrays.
-
-    Returns
-    -------
-    out : ndarray, bool
-        Output array.
-
-    See Also
-    --------
-    greater, less, less_equal, equal
-
-    Examples
-    --------
-    >>> np.greater_equal([4,2],[2,2])
-    array([ True, True], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'hypot',
-    """
-    Given two sides of a right triangle, return its hypotenuse.
-
-    Parameters
-    ----------
-    x : array_like
-      Base of the triangle.
-    y : array_like
-      Height of the triangle.
-
-    Returns
-    -------
-    z : ndarray
-      Hypotenuse of the triangle: sqrt(x**2 + y**2)
-
-    Examples
-    --------
-    >>> np.hypot(3,4)
-    5.0
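-
-    Like other ufuncs, `hypot` broadcasts its inputs (illustrative; exact
-    repr formatting may differ between versions):
-
-    >>> np.hypot(3*np.ones((3, 3)), [4, 4, 4])
-    array([[ 5.,  5.,  5.],
-           [ 5.,  5.,  5.],
-           [ 5.,  5.,  5.]])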
-
-    """)
-
-add_newdoc('numpy.core.umath', 'invert',
-    """
-    Compute bit-wise inversion, or bit-wise NOT, element-wise.
-
-    When calculating the bit-wise NOT of an element ``x``, each element is
-    first converted to its binary representation (which works
-    just like the decimal system, only now we're using 2 instead of 10):
-
-    .. math:: x = \\sum_{i=0}^{W-1} a_i \\cdot 2^i
-
-    where ``W`` is the bit-width of the type (e.g., 8 for a byte or uint8),
-    and each :math:`a_i` is either 0 or 1.  For example, 13 is represented
-    as ``00001101``, which translates to :math:`2^3 + 2^2 + 2^0`.
-
-    The bit-wise operator is the result of
-
-    .. math:: z = \\sum_{i=0}^{i=W-1} (\\lnot a_i) \\cdot 2^i,
-
-    where :math:`\\lnot` is the NOT operator, which yields 1 whenever
-    :math:`a_i` is 0 and yields 0 whenever :math:`a_i` is 1.
-
-    For signed integer inputs, the two's complement is returned.
-    In a two's-complement system negative numbers are represented by the two's
-    complement of the absolute value. This is the most common method of
-    representing signed integers on computers [1]_. An N-bit two's-complement
-    system can represent every integer in the range
-    :math:`-2^{N-1}` to :math:`+2^{N-1}-1`.
-
-    Parameters
-    ----------
-    x1 : ndarray
-        Only integer types are handled (including booleans).
-
-    Returns
-    -------
-    out : ndarray
-        Result.
-
-    See Also
-    --------
-    bitwise_and, bitwise_or, bitwise_xor
-    logical_not
-    binary_repr :
-        Return the binary representation of the input number as a string.
-
-    Notes
-    -----
-    `bitwise_not` is an alias for `invert`:
-
-    >>> np.bitwise_not is np.invert
-    True
-
-    References
-    ----------
-    .. [1] Wikipedia, "Two's complement",
-        http://en.wikipedia.org/wiki/Two's_complement
-
-    Examples
-    --------
-    We've seen that 13 is represented by ``00001101``.
-    The invert or bit-wise NOT of 13 is then:
-
-    >>> np.invert(np.array([13], dtype=np.uint8))
-    array([242], dtype=uint8)
-    >>> np.binary_repr(13, width=8)
-    '00001101'
-    >>> np.binary_repr(242, width=8)
-    '11110010'
-
-    The result depends on the bit-width:
-
-    >>> np.invert(np.array([13], dtype=np.uint16))
-    array([65522], dtype=uint16)
-    >>> np.binary_repr(13, width=16)
-    '0000000000001101'
-    >>> np.binary_repr(65522, width=16)
-    '1111111111110010'
-
-    When using signed integer types the result is the two's complement of
-    the result for the unsigned type:
-
-    >>> np.invert(np.array([13], dtype=np.int8))
-    array([-14], dtype=int8)
-    >>> np.binary_repr(-14, width=8)
-    '11110010'
-
-    Booleans are accepted as well:
-
-    >>> np.invert(np.array([True, False]))
-    array([False,  True], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'isfinite',
-    """
-    Returns True for each element that is a finite number.
-
-    Shows which elements of the input are finite (not infinity or not
-    Not a Number).
-
-    Parameters
-    ----------
-    x : array_like
-      Input values.
-    y : array_like, optional
-      A boolean array with the same shape and type as `x` to store the result.
-
-    Returns
-    -------
-    y : ndarray, bool
-      For scalar input data, the result is a new numpy boolean with value True
-      if the input data is finite; otherwise the value is False (input is
-      either positive infinity, negative infinity or Not a Number).
-
-      For array input data, the result is a numpy boolean array with the same
-      dimensions as the input and the values are True if the corresponding
-      element of the input is finite; otherwise the values are False (element
-      is either positive infinity, negative infinity or Not a Number). If the
-      second argument is supplied then a numpy integer array is returned with
-      values 0 or 1 corresponding to False and True, respectively.
-
-    See Also
-    --------
-    isinf : Shows which elements are positive or negative infinity.
-    isneginf : Shows which elements are negative infinity.
-    isposinf : Shows which elements are positive infinity.
-    isnan : Shows which elements are Not a Number (NaN).
-
-
-    Notes
-    -----
-    Not a Number, positive infinity and negative infinity are considered
-    to be non-finite.
-
-    Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic
-    (IEEE 754). This means that Not a Number is not equivalent to infinity,
-    and that positive infinity is not equivalent to negative infinity; plain
-    infinity, however, is equivalent to positive infinity.
-
-    Errors result if second argument is also supplied with scalar input or
-    if first and second arguments have different shapes.
-
-    Examples
-    --------
-    >>> np.isfinite(1)
-    True
-    >>> np.isfinite(0)
-    True
-    >>> np.isfinite(np.nan)
-    False
-    >>> np.isfinite(np.inf)
-    False
-    >>> np.isfinite(np.NINF)
-    False
-    >>> np.isfinite([np.log(-1.),1.,np.log(0)])
-    array([False,  True, False], dtype=bool)
-    >>> x=np.array([-np.inf, 0., np.inf])
-    >>> y=np.array([2,2,2])
-    >>> np.isfinite(x,y)
-    array([0, 1, 0])
-    >>> y
-    array([0, 1, 0])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'isinf',
-    """
-    Shows which elements of the input are positive or negative infinity.
-    Returns a numpy boolean scalar or array resulting from an element-wise test
-    for positive or negative infinity.
-
-    Parameters
-    ----------
-    x : array_like
-      Input values.
-    y : array_like, optional
-      An array with the same shape as `x` to store the result.
-
-    Returns
-    -------
-    y : {ndarray, bool}
-      For scalar input data, the result is a new numpy boolean with value True
-      if the input data is positive or negative infinity; otherwise the value
-      is False.
-
-      For array input data, the result is a numpy boolean array with the same
-      dimensions as the input and the values are True if the corresponding
-      element of the input is positive or negative infinity; otherwise the
-      values are False.  If the second argument is supplied then a numpy
-      integer array is returned with values 0 or 1 corresponding to False and
-      True, respectively.
-
-    See Also
-    --------
-    isneginf : Shows which elements are negative infinity.
-    isposinf : Shows which elements are positive infinity.
-    isnan : Shows which elements are Not a Number (NaN).
-    isfinite : Shows which elements are finite (not NaN and not infinity).
-
-    Notes
-    -----
-    Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic
-    (IEEE 754). This means that Not a Number is not equivalent to infinity,
-    and that positive infinity is not equivalent to negative infinity; plain
-    infinity, however, is equivalent to positive infinity.
-
-    Errors result if second argument is also supplied with scalar input or
-    if first and second arguments have different shapes.
-
-    Numpy's definitions for positive infinity (PINF) and negative infinity
-    (NINF) may change in future versions.
-
-    Examples
-    --------
-    >>> np.isinf(np.inf)
-    True
-    >>> np.isinf(np.nan)
-    False
-    >>> np.isinf(np.NINF)
-    True
-    >>> np.isinf([np.inf, -np.inf, 1.0, np.nan])
-    array([ True,  True, False, False], dtype=bool)
-    >>> x=np.array([-np.inf, 0., np.inf])
-    >>> y=np.array([2,2,2])
-    >>> np.isinf(x,y)
-    array([1, 0, 1])
-    >>> y
-    array([1, 0, 1])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'isnan',
-    """
-    Returns a numpy boolean scalar or array resulting from an element-wise test
-    for Not a Number (NaN).
-
-    Parameters
-    ----------
-    x : array_like
-      Input data.
-
-    Returns
-    -------
-    y : {ndarray, bool}
-      For scalar input data, the result is a new numpy boolean with value True
-      if the input data is NaN; otherwise the value is False.
-
-      For array input data, the result is a numpy boolean array with the same
-      dimensions as the input and the values are True if the corresponding
-      element of the input is Not a Number; otherwise the values are False.
-
-    See Also
-    --------
-    isinf : Tests for infinity.
-    isneginf : Tests for negative infinity.
-    isposinf : Tests for positive infinity.
-    isfinite : Shows which elements are finite (not NaN and not infinity).
-
-    Notes
-    -----
-    Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic
-    (IEEE 754). This means that Not a Number is not equivalent to infinity.
-
-    Examples
-    --------
-    >>> np.isnan(np.nan)
-    True
-    >>> np.isnan(np.inf)
-    False
-    >>> np.isnan([np.log(-1.),1.,np.log(0)])
-    array([ True, False, False], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'left_shift',
-    """
-    Shift the bits of an integer to the left.
-
-    Bits are shifted to the left by appending `x2` 0s at the right of `x1`.
-    Since the internal representation of numbers is in binary format, this
-    operation is equivalent to multiplying `x1` by ``2**x2``.
-
-    Parameters
-    ----------
-    x1 : array_like of integer type
-        Input values.
-    x2 : array_like of integer type
-        Number of zeros to append to `x1`.
-
-    Returns
-    -------
-    out : array of integer type
-        Return `x1` with bits shifted `x2` times to the left.
-
-    See Also
-    --------
-    right_shift : Shift the bits of an integer to the right.
-    binary_repr : Return the binary representation of the input number
-        as a string.
-
-    Examples
-    --------
-    >>> np.left_shift(5, [1,2,3])
-    array([10, 20, 40])
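-
-    The ``2**x2`` interpretation can be checked with `binary_repr`
-    (illustrative):
-
-    >>> np.binary_repr(5)
-    '101'
-    >>> np.left_shift(5, 2)
-    20
-    >>> np.binary_repr(20)
-    '10100'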
-
-    """)
-
-add_newdoc('numpy.core.umath', 'less',
-    """
-    Returns (x1 < x2) element-wise.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Input arrays.
-
-    Returns
-    -------
-    out : {ndarray, bool}
-        Output array of bools, or a single bool if `x1` and `x2` are scalars.
-
-    See Also
-    --------
-    less_equal
-
-    Examples
-    --------
-    >>> np.less([1,2],[2,2])
-    array([ True, False], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'less_equal',
-    """
-    Returns (x1 <= x2) element-wise.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Input arrays.
-
-    Returns
-    -------
-    out : {ndarray, bool}
-        Output array of bools, or a single bool if `x1` and `x2` are scalars.
-
-    See Also
-    --------
-    less
-
-    Examples
-    --------
-    >>> np.less_equal([1,2,3],[2,2,2])
-    array([ True,  True, False], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'log',
-    """
-    Natural logarithm, element-wise.
-
-    The natural logarithm `log` is the inverse of the exponential function,
-    so that `log(exp(x)) = x`. The natural logarithm is logarithm in base `e`.
-
-    Parameters
-    ----------
-    x : array_like
-        Input value.
-
-    Returns
-    -------
-    y : ndarray
-        The natural logarithm of `x`, element-wise.
-
-    See Also
-    --------
-    log10, log2, log1p
-
-    Notes
-    -----
-    Logarithm is a multivalued function: for each `x` there is an infinite
-    number of `z` such that `exp(z) = x`. The convention is to return the `z`
-    whose imaginary part lies in `[-pi, pi]`.
-
-    For real-valued input data types, `log` always returns real output. For
-    each value that cannot be expressed as a real number or infinity, it
-    yields ``nan`` and sets the `invalid` floating point error flag.
-
-    For complex-valued input, `log` is a complex analytical function that
-    has a branch cut `[-inf, 0]` and is continuous from above on it. `log`
-    handles the floating-point negative zero as an infinitesimal negative
-    number, conforming to the C99 standard.
-
-    References
-    ----------
-    .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions",
-           10th printing, 1964, pp. 67. http://www.math.sfu.ca/~cbm/aands/
-    .. [2] Wikipedia, "Logarithm". http://en.wikipedia.org/wiki/Logarithm
-
-    Examples
-    --------
-    >>> np.log([1, np.e, np.e**2, 0])
-    array([  0.,   1.,   2., -Inf])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'log10',
-    """
-    Compute the logarithm in base 10 element-wise.
-
-    Parameters
-    ----------
-    x : array_like
-        Input values.
-
-    Returns
-    -------
-    y : ndarray
-        Base-10 logarithm of `x`.
-
-    Notes
-    -----
-    Logarithm is a multivalued function: for each `x` there is an infinite
-    number of `z` such that `10**z = x`. The convention is to return the `z`
-    whose imaginary part lies in `[-pi, pi]`.
-
-    For real-valued input data types, `log10` always returns real output. For
-    each value that cannot be expressed as a real number or infinity, it
-    yields ``nan`` and sets the `invalid` floating point error flag.
-
-    For complex-valued input, `log10` is a complex analytical function that
-    has a branch cut `[-inf, 0]` and is continuous from above on it. `log10`
-    handles the floating-point negative zero as an infinitesimal negative
-    number, conforming to the C99 standard.
-
-    References
-    ----------
-    .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions",
-           10th printing, 1964, pp. 67. http://www.math.sfu.ca/~cbm/aands/
-    .. [2] Wikipedia, "Logarithm". http://en.wikipedia.org/wiki/Logarithm
-
-    Examples
-    --------
-    >>> np.log10([1.e-15,-3.])
-    array([-15.,  NaN])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'log1p',
-    """
-    `log(1 + x)` in base `e`, elementwise.
-
-    Parameters
-    ----------
-    x : array_like
-        Input values.
-
-    Returns
-    -------
-    y : ndarray
-        Natural logarithm of `1 + x`, elementwise.
-
-    Notes
-    -----
-    For real-valued input, `log1p` is accurate also for `x` so small
-    that `1 + x == 1` in floating-point accuracy.
-
-    Logarithm is a multivalued function: for each `x` there is an infinite
-    number of `z` such that `exp(z) = 1 + x`. The convention is to return
-    the `z` whose imaginary part lies in `[-pi, pi]`.
-
-    For real-valued input data types, `log1p` always returns real output. For
-    each value that cannot be expressed as a real number or infinity, it
-    yields ``nan`` and sets the `invalid` floating point error flag.
-
-    For complex-valued input, `log1p` is a complex analytical function that
-    has a branch cut `[-inf, -1]` and is continuous from above on it. `log1p`
-    handles the floating-point negative zero as an infinitesimal negative
-    number, conforming to the C99 standard.
-
-    References
-    ----------
-    .. [1] M. Abramowitz and I.A. Stegun, "Handbook of Mathematical Functions",
-           10th printing, 1964, pp. 67. http://www.math.sfu.ca/~cbm/aands/
-    .. [2] Wikipedia, "Logarithm". http://en.wikipedia.org/wiki/Logarithm
-
-    Examples
-    --------
-    >>> np.log1p(1e-99)
-    1e-99
-    >>> np.log(1 + 1e-99)
-    0.0
-
-    """)
-
-add_newdoc('numpy.core.umath', 'logical_and',
-    """
-    Compute the truth value of x1 AND x2 elementwise.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Logical AND is applied to the elements of `x1` and `x2`.
-        They have to be of the same shape.
-
-
-    Returns
-    -------
-    y : {ndarray, bool}
-        Boolean result with the same shape as `x1` and `x2` of the logical
-        AND operation on elements of `x1` and `x2`.
-
-    See Also
-    --------
-    logical_or, logical_not, logical_xor
-    bitwise_and
-
-    Examples
-    --------
-    >>> np.logical_and(True, False)
-    False
-    >>> np.logical_and([True, False], [False, False])
-    array([False, False], dtype=bool)
-
-    >>> x = np.arange(5)
-    >>> np.logical_and(x>1, x<4)
-    array([False, False,  True,  True, False], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'logical_not',
-    """
-    Compute the truth value of NOT x elementwise.
-
-    Parameters
-    ----------
-    x : array_like
-        Logical NOT is applied to the elements of `x`.
-
-    Returns
-    -------
-    y : {ndarray, bool}
-        Boolean result with the same shape as `x` of the NOT operation
-        on elements of `x`.
-
-    See Also
-    --------
-    logical_and, logical_or, logical_xor
-
-    Examples
-    --------
-    >>> np.logical_not(3)
-    False
-    >>> np.logical_not([True, False, 0, 1])
-    array([False,  True,  True, False], dtype=bool)
-
-    >>> x = np.arange(5)
-    >>> np.logical_not(x<3)
-    array([False, False, False,  True,  True], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'logical_or',
-    """
-    Compute the truth value of x1 OR x2 elementwise.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Logical OR is applied to the elements of `x1` and `x2`.
-        They have to be of the same shape.
-
-    Returns
-    -------
-    y : {ndarray, bool}
-        Boolean result with the same shape as `x1` and `x2` of the logical
-        OR operation on elements of `x1` and `x2`.
-
-    See Also
-    --------
-    logical_and, logical_not, logical_xor
-    bitwise_or
-
-    Examples
-    --------
-    >>> np.logical_or(True, False)
-    True
-    >>> np.logical_or([True, False], [False, False])
-    array([ True, False], dtype=bool)
-
-    >>> x = np.arange(5)
-    >>> np.logical_or(x < 1, x > 3)
-    array([ True, False, False, False,  True], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'logical_xor',
-    """
-    Compute the truth value of x1 XOR x2 elementwise.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        Logical XOR is applied to the elements of `x1` and `x2`.
-        They have to be of the same shape.
-
-    Returns
-    -------
-    y : {ndarray, bool}
-        Boolean result with the same shape as `x1` and `x2` of the logical
-        XOR operation on elements of `x1` and `x2`.
-
-    See Also
-    --------
-    logical_and, logical_or, logical_not
-    bitwise_xor
-
-    Examples
-    --------
-    >>> np.logical_xor(True, False)
-    True
-    >>> np.logical_xor([True, True, False, False], [True, False, True, False])
-    array([False,  True,  True, False], dtype=bool)
-
-    >>> x = np.arange(5)
-    >>> np.logical_xor(x < 1, x > 3)
-    array([ True, False, False, False,  True], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'maximum',
-    """
-    Element-wise maximum of array elements.
-
-    Compare two arrays and returns a new array containing
-    the element-wise maxima.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        The arrays holding the elements to be compared.
-
-    Returns
-    -------
-    y : {ndarray, scalar}
-        The maximum of `x1` and `x2`, element-wise.  Returns scalar if
-        both  `x1` and `x2` are scalars.
-
-    See Also
-    --------
-    minimum :
-      element-wise minimum
-
-    Notes
-    -----
-    Equivalent to ``np.where(x1 > x2, x1, x2)`` but faster and does proper
-    broadcasting.
-
-    Examples
-    --------
-    >>> np.maximum([2, 3, 4], [1, 5, 2])
-    array([2, 5, 4])
-
-    >>> np.maximum(np.eye(2), [0.5, 2])
-    array([[ 1. ,  2. ],
-           [ 0.5,  2. ]])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'minimum',
-    """
-    Element-wise minimum of array elements.
-
-    Compare two arrays and returns a new array containing
-    the element-wise minima.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        The arrays holding the elements to be compared.
-
-    Returns
-    -------
-    y : {ndarray, scalar}
-        The minimum of `x1` and `x2`, element-wise.  Returns scalar if
-        both  `x1` and `x2` are scalars.
-
-    See Also
-    --------
-    maximum :
-        element-wise maximum
-
-    Notes
-    -----
-    Equivalent to ``np.where(x1 < x2, x1, x2)`` but faster and does proper
-    broadcasting.
-
-    Examples
-    --------
-    >>> np.minimum([2, 3, 4], [1, 5, 2])
-    array([1, 3, 2])
-
-    >>> np.minimum(np.eye(2), [0.5, 2])
-    array([[ 0.5,  0. ],
-           [ 0. ,  1. ]])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'modf',
-    """
-    Return the fractional and integral part of a number.
-
-    The fractional and integral parts are negative if the given number is
-    negative.
-
-    Parameters
-    ----------
-    x : array_like
-        Input number.
-
-    Returns
-    -------
-    y1 : ndarray
-        Fractional part of `x`.
-    y2 : ndarray
-        Integral part of `x`.
-
-    Examples
-    --------
-    >>> np.modf(2.5)
-    (0.5, 2.0)
-    >>> np.modf(-.4)
-    (-0.40000000000000002, -0.0)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'multiply',
-    """
-    Multiply arguments elementwise.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        The arrays to be multiplied.
-
-    Returns
-    -------
-    y : ndarray
-        The product of `x1` and `x2`, elementwise. Returns a scalar if
-        both  `x1` and `x2` are scalars.
-
-    Notes
-    -----
-    Equivalent to `x1` * `x2` in terms of array-broadcasting.
-
-    Examples
-    --------
-    >>> np.multiply(2.0, 4.0)
-    8.0
-
-    >>> x1 = np.arange(9.0).reshape((3, 3))
-    >>> x2 = np.arange(3.0)
-    >>> np.multiply(x1, x2)
-    array([[  0.,   1.,   4.],
-           [  0.,   4.,  10.],
-           [  0.,   7.,  16.]])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'negative',
-    """
-    Returns an array with the negative of each element of the original array.
-
-    Parameters
-    ----------
-    x : {array_like, scalar}
-        Input array.
-
-    Returns
-    -------
-    y : {ndarray, scalar}
-        Returned array or scalar `y=-x`.
-
-    Examples
-    --------
-    >>> np.negative([1.,-1.])
-    array([-1.,  1.])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'not_equal',
-    """
-    Return (x1 != x2) element-wise.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-      Input arrays.
-    out : ndarray, optional
-      A placeholder the same shape as `x1` to store the result.
-
-    Returns
-    -------
-    not_equal : ndarray bool, scalar bool
-      For each element in `x1, x2`, return True if `x1` is not equal
-      to `x2` and False otherwise.
-
-
-    See Also
-    --------
-    equal, greater, greater_equal, less, less_equal
-
-    Examples
-    --------
-    >>> np.not_equal([1.,2.], [1., 3.])
-    array([False,  True], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'ones_like',
-    """
-    Returns an array of ones with the same shape and type as a given array.
-
-    Equivalent to ``b = a.copy(); b.fill(1)``.
-
-    Please refer to the documentation for `zeros_like`.
-
-    See Also
-    --------
-    zeros_like
-
-    Examples
-    --------
-    >>> a = np.array([[1, 2, 3], [4, 5, 6]])
-    >>> np.ones_like(a)
-    array([[1, 1, 1],
-           [1, 1, 1]])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'power',
-    """
-    Returns element-wise base array raised to power from second array.
-
-    Raise each base in `x1` to the power of the exponents in `x2`. This
-    requires that `x1` and `x2` must be broadcastable to the same shape.
-
-    Parameters
-    ----------
-    x1 : array_like
-        The bases.
-    x2 : array_like
-        The exponents.
-
-    Returns
-    -------
-    y : ndarray
-        The bases in `x1` raised to the exponents in `x2`.
-
-    Examples
-    --------
-    Cube each element in a list.
-
-    >>> x1 = range(6)
-    >>> x1
-    [0, 1, 2, 3, 4, 5]
-    >>> np.power(x1, 3)
-    array([  0,   1,   8,  27,  64, 125])
-
-    Raise the bases to different exponents.
-
-    >>> x2 = [1.0, 2.0, 3.0, 3.0, 2.0, 1.0]
-    >>> np.power(x1, x2)
-    array([  0.,   1.,   8.,  27.,  16.,   5.])
-
-    The effect of broadcasting.
-
-    >>> x2 = np.array([[1, 2, 3, 3, 2, 1], [1, 2, 3, 3, 2, 1]])
-    >>> x2
-    array([[1, 2, 3, 3, 2, 1],
-           [1, 2, 3, 3, 2, 1]])
-    >>> np.power(x1, x2)
-    array([[ 0,  1,  8, 27, 16,  5],
-           [ 0,  1,  8, 27, 16,  5]])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'radians',
-    """
-    Convert angles from degrees to radians.
-
-    Parameters
-    ----------
-    x : array_like
-      Angles in degrees.
-
-    Returns
-    -------
-    y : ndarray
-      The corresponding angle in radians.
-
-    See Also
-    --------
-    degrees : Convert angles from radians to degrees.
-    unwrap : Remove large jumps in angle by wrapping.
-
-    Notes
-    -----
-    ``radians(x)`` is ``x * pi / 180``.
-
-    Examples
-    --------
-    >>> np.radians(180)
-    3.1415926535897931
-
-    """)
-
-add_newdoc('numpy.core.umath', 'reciprocal',
-    """
-    Return element-wise reciprocal.
-
-    Parameters
-    ----------
-    x : array_like
-        Input value.
-
-    Returns
-    -------
-    y : ndarray
-        Return value.
-
-    Examples
-    --------
-    >>> np.reciprocal(2.)
-    0.5
-    >>> np.reciprocal([1, 2., 3.33])
-    array([ 1.       ,  0.5      ,  0.3003003])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'remainder',
-    """
-    Returns element-wise remainder of division.
-
-    Computes `x1 - floor(x1/x2)*x2`.
-
-    Parameters
-    ----------
-    x1 : array_like
-        Dividend array.
-    x2 : array_like
-        Divisor array.
-
-    Returns
-    -------
-    y : ndarray
-        The remainder of the quotient `x1/x2`, element-wise. Returns a scalar
-        if both  `x1` and `x2` are scalars.
-
-    See Also
-    --------
-    divide
-    floor
-
-    Notes
-    -----
-    Returns 0 when `x2` is 0.
-
-    Examples
-    --------
-    >>> np.remainder([4,7],[2,3])
-    array([0, 1])
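-
-    Unlike `fmod`, the result takes the sign of the divisor (illustrative):
-
-    >>> np.remainder([-3, -2, -1, 1, 2, 3], 2)
-    array([1, 0, 1, 1, 0, 1])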
-
-    """)
-
-add_newdoc('numpy.core.umath', 'right_shift',
-    """
-    Shift the bits of an integer to the right.
-
-    Bits are shifted to the right by removing `x2` bits at the right of `x1`.
-    Since the internal representation of numbers is in binary format, this
-    operation is equivalent to dividing `x1` by ``2**x2``.
-
-    Parameters
-    ----------
-    x1 : array_like, int
-        Input values.
-    x2 : array_like, int
-        Number of bits to remove at the right of `x1`.
-
-    Returns
-    -------
-    out : ndarray, int
-        Return `x1` with bits shifted `x2` times to the right.
-
-    See Also
-    --------
-    left_shift : Shift the bits of an integer to the left.
-    binary_repr : Return the binary representation of the input number
-        as a string.
-
-    Examples
-    --------
-    >>> np.right_shift(10, [1,2,3])
-    array([5, 2, 1])
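-
-    The division by ``2**x2`` can be checked with `binary_repr`
-    (illustrative):
-
-    >>> np.binary_repr(10)
-    '1010'
-    >>> np.right_shift(10, 1)
-    5
-    >>> np.binary_repr(5)
-    '101'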
-
-    """)
-
-add_newdoc('numpy.core.umath', 'rint',
-    """
-    Round elements of the array to the nearest integer.
-
-    Parameters
-    ----------
-    x : array_like
-        Input array.
-
-    Returns
-    -------
-    out : ndarray
-        Output array is same shape and type as `x`.
-
-    Examples
-    --------
-    >>> a = [-4.1, -3.6, -2.5, 0.1, 2.5, 3.1, 3.9]
-    >>> np.rint(a)
-    array([-4., -4., -2.,  0.,  2.,  3.,  4.])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'sign',
-    """
-    Returns an element-wise indication of the sign of a number.
-
-    The `sign` function returns ``-1 if x < 0, 0 if x==0, 1 if x > 0``.
-
-    Parameters
-    ----------
-    x : array_like
-      Input values.
-
-    Returns
-    -------
-    y : ndarray
-      The sign of `x`.
-
-    Examples
-    --------
-    >>> np.sign([-5., 4.5])
-    array([-1.,  1.])
-    >>> np.sign(0)
-    0
-
-    """)
-
-add_newdoc('numpy.core.umath', 'signbit',
-    """
-    Returns element-wise True where signbit is set (less than zero).
-
-    Parameters
-    ----------
-    x : array_like
-        The input value(s).
-
-    Returns
-    -------
-    out : array_like, bool
-        Output.
-
-    Examples
-    --------
-    >>> np.signbit(-1.2)
-    True
-    >>> np.signbit(np.array([1, -2.3, 2.1]))
-    array([False,  True, False], dtype=bool)
-
-    """)
-
-add_newdoc('numpy.core.umath', 'sin',
-    """
-    Trigonometric sine, element-wise.
-
-    Parameters
-    ----------
-    x : array_like
-        Angle, in radians (:math:`2 \\pi` rad equals 360 degrees).
-
-    Returns
-    -------
-    y : array_like
-        The sine of each element of x.
-
-    See Also
-    --------
-    arcsin, sinh, cos
-
-    Notes
-    -----
-    The sine is one of the fundamental functions of trigonometry
-    (the mathematical study of triangles).  Consider a circle of radius
-    1 centered on the origin.  A ray comes in from the :math:`+x` axis,
-    makes an angle at the origin (measured counter-clockwise from that
-    axis), and departs from the origin.  The :math:`y` coordinate of
-    the outgoing ray's intersection with the unit circle is the sine
-    of that angle.  It ranges from -1 for :math:`x=3\\pi / 2` to
-    +1 for :math:`\\pi / 2.`  The function has zeroes where the angle is
-    a multiple of :math:`\\pi`.  Sines of angles between :math:`\\pi` and
-    :math:`2\\pi` are negative.  The numerous properties of the sine and
-    related functions are included in any standard trigonometry text.
-
-    Examples
-    --------
-    Print sine of one angle:
-
-    >>> np.sin(np.pi/2.)
-    1.0
-
-    Print sines of an array of angles given in degrees:
-
-    >>> np.sin(np.array((0., 30., 45., 60., 90.)) * np.pi / 180. )
-    array([ 0.        ,  0.5       ,  0.70710678,  0.8660254 ,  1.        ])
-
-    Plot the sine function:
-
-    >>> import matplotlib.pyplot as plt
-    >>> x = np.linspace(-np.pi, np.pi, 201)
-    >>> plt.plot(x, np.sin(x))
-    >>> plt.xlabel('Angle [rad]')
-    >>> plt.ylabel('sin(x)')
-    >>> plt.axis('tight')
-    >>> plt.show()
-
-    """)
-
-add_newdoc('numpy.core.umath', 'sinh',
-    """
-    Hyperbolic sine, element-wise.
-
-    Equivalent to ``1/2 * (np.exp(x) - np.exp(-x))`` or
-    ``-1j * np.sin(1j*x)``.
-
-    Parameters
-    ----------
-    x : array_like
-        Input array.
-
-    Returns
-    -------
-    out : ndarray
-        Output array of same shape as `x`.
-
-    """)
-
-add_newdoc('numpy.core.umath', 'sqrt',
-    """
-    Return the positive square-root of an array, element-wise.
-
-    Parameters
-    ----------
-    x : array_like
-        The square root of each element in this array is calculated.
-
-    Returns
-    -------
-    y : ndarray
-        An array of the same shape as `x`, containing the square-root of
-        each element in `x`.  If any element in `x`
-        is complex, a complex array is returned.  If all of the elements
-        of `x` are real, negative elements return numpy.nan elements.
-
-    See Also
-    --------
-    numpy.lib.scimath.sqrt
-        A version which returns complex numbers when given negative reals.
-
-    Notes
-    -----
-    `sqrt` has a branch cut ``[-inf, 0)`` and is continuous from above on it.
-
-    Examples
-    --------
-    >>> np.sqrt([1,4,9])
-    array([ 1.,  2.,  3.])
-
-    >>> np.sqrt([4, -1, -3+4J])
-    array([ 2.+0.j,  0.+1.j,  1.+2.j])
-
-    >>> np.sqrt([4, -1, numpy.inf])
-    array([  2.,  NaN,  Inf])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'square',
-    """
-    Return the element-wise square of the input.
-
-    Parameters
-    ----------
-    x : array_like
-        Input data.
-
-    Returns
-    -------
-    out : ndarray
-        Element-wise `x*x`, of the same shape and dtype as `x`.
-        Returns scalar if `x` is a scalar.
-
-    See Also
-    --------
-    numpy.linalg.matrix_power
-    sqrt
-    power
-
-    Examples
-    --------
-    >>> np.square([-1j, 1])
-    array([-1.-0.j,  1.+0.j])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'subtract',
-    """
-    Subtract arguments element-wise.
-
-    Parameters
-    ----------
-    x1, x2 : array_like
-        The arrays to be subtracted from each other.  If type is 'array_like'
-        the `x1` and `x2` shapes must be identical.
-
-    Returns
-    -------
-    y : ndarray
-        The difference of `x1` and `x2`, element-wise.  Returns a scalar if
-        both  `x1` and `x2` are scalars.
-
-    Notes
-    -----
-    Equivalent to `x1` - `x2` in terms of array-broadcasting.
-
-    Examples
-    --------
-    >>> np.subtract(1.0, 4.0)
-    -3.0
-
-    >>> x1 = np.arange(9.0).reshape((3, 3))
-    >>> x2 = np.arange(3.0)
-    >>> np.subtract(x1, x2)
-    array([[ 0.,  0.,  0.],
-           [ 3.,  3.,  3.],
-           [ 6.,  6.,  6.]])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'tan',
-    """
-    Compute tangent element-wise.
-
-    Parameters
-    ----------
-    x : array_like
-      Angles in radians.
-
-    Returns
-    -------
-    y : ndarray
-      The corresponding tangent values.
-
-
-    Examples
-    --------
-    >>> from math import pi
-    >>> np.tan(np.array([-pi,pi/2,pi]))
-    array([  1.22460635e-16,   1.63317787e+16,  -1.22460635e-16])
-
-    """)
-
-add_newdoc('numpy.core.umath', 'tanh',
-    """
-    Hyperbolic tangent element-wise.
-
-    Parameters
-    ----------
-    x : array_like
-        Input array.
-
-    Returns
-    -------
-    y : ndarray
-        The corresponding hyperbolic tangent values.
-
-    """)
-
-add_newdoc('numpy.core.umath', 'true_divide',
-    """
-    Returns an element-wise, true division of the inputs.
-
-    Instead of the Python traditional 'floor division', this returns a true
-    division.  True division adjusts the output type to present the best
-    answer, regardless of input types.
-
-    Parameters
-    ----------
-    x1 : array_like
-        Dividend
-    x2 : array_like
-        Divisor
-
-    Returns
-    -------
-    out : ndarray
-        Result is scalar if both inputs are scalar, ndarray otherwise.
-
-    Notes
-    -----
-    The floor division operator ('//') was added in Python 2.2 making '//'
-    and '/' equivalent operators.  The default floor division operation of
-    '/' can be replaced by true division with
-    'from __future__ import division'.
-
-    In Python 3.0, '//' will be the floor division operator and '/' will be
-    the true division operator.  The 'true_divide(`x1`, `x2`)' function is
-    equivalent to true division in Python.
-
-    """)

Modified: branches/dynamic_cpu_configuration/numpy/core/code_generators/generate_umath.py
===================================================================
--- branches/dynamic_cpu_configuration/numpy/core/code_generators/generate_umath.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/core/code_generators/generate_umath.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -1,7 +1,7 @@
 import re, textwrap
 import sys, os
 sys.path.insert(0, os.path.dirname(__file__))
-import docstrings
+import ufunc_docstrings as docstrings
 sys.path.pop(0)
 
 Zero = "PyUFunc_Zero"
@@ -330,12 +330,12 @@
           ),
 'logaddexp' :
     Ufunc(2, 1, None,
-          "",
+          docstrings.get('numpy.core.umath.logaddexp'),
           TD(flts, f="logaddexp")
           ),
 'logaddexp2' :
     Ufunc(2, 1, None,
-          "",
+          docstrings.get('numpy.core.umath.logaddexp2'),
           TD(flts, f="logaddexp2")
           ),
 'bitwise_and' :
@@ -381,7 +381,7 @@
           ),
 'rad2deg' :
     Ufunc(1, 1, None,
-          '',
+          docstrings.get('numpy.core.umath.rad2deg'),
           TD(fltsM, f='rad2deg'),
           ),
 'radians' :
@@ -391,7 +391,7 @@
           ),
 'deg2rad' :
     Ufunc(1, 1, None,
-          '',
+          docstrings.get('numpy.core.umath.deg2rad'),
           TD(fltsM, f='deg2rad'),
           ),
 'arccos' :
@@ -474,7 +474,7 @@
           ),
 'exp2' :
     Ufunc(1, 1, None,
-          '',
+          docstrings.get('numpy.core.umath.exp2'),
           TD(flts, f='exp2'),
           TD(M, f='exp2'),
           ),
@@ -492,7 +492,7 @@
           ),
 'log2' :
     Ufunc(1, 1, None,
-          '',
+          docstrings.get('numpy.core.umath.log2'),
           TD(flts, f='log2'),
           TD(M, f='log2'),
           ),
@@ -522,7 +522,7 @@
           ),
 'trunc' :
     Ufunc(1, 1, None,
-          '',
+          docstrings.get('numpy.core.umath.trunc'),
           TD(flts, f='trunc'),
           TD(M, f='trunc'),
           ),
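
The generator now looks each docstring up by fully qualified ufunc name. A minimal sketch of the interface it relies on (hypothetical simplified code, not the actual contents of ufunc_docstrings.py):

    # docdict maps 'numpy.core.umath.<name>' to the docstring text; get() is
    # the accessor that generate_umath.py calls above.
    docdict = {}

    def add_newdoc(place, name, doc):
        docdict['.'.join([place, name])] = doc

    def get(name):
        return docdict.get(name)

    add_newdoc('numpy.core.umath', 'logaddexp',
               "Logarithm of the sum of exponentiations of the inputs.")
    assert get('numpy.core.umath.logaddexp') is not None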

Copied: branches/dynamic_cpu_configuration/numpy/core/code_generators/ufunc_docstrings.py (from rev 6149, trunk/numpy/core/code_generators/ufunc_docstrings.py)

Modified: branches/dynamic_cpu_configuration/numpy/core/src/umath_funcs_c99.inc.src
===================================================================
--- branches/dynamic_cpu_configuration/numpy/core/src/umath_funcs_c99.inc.src	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/core/src/umath_funcs_c99.inc.src	2008-12-22 13:23:03 UTC (rev 6188)
@@ -17,9 +17,7 @@
  *    can be linked from the math library. The result can depend on the
  *    optimization flags as well as the compiler, so can't be known ahead of
  *    time. If the function can't be linked, then either it is absent, defined
- *    as a macro, or is an intrinsic (hardware) function. If it is linkable it
- *    may still be the case that no prototype is available. So to cover all the
- *    cases requires the following construction.
+ *    as a macro, or is an intrinsic (hardware) function.
  *
  *    i) Undefine any possible macros:
  *
@@ -27,44 +25,20 @@
  *    #undef foo
  *    #endif
  *
- *    ii) Check if the function was in the library, If not, define the
- *    function with npy_ prepended to its name to avoid conflict with any
- *    intrinsic versions, then use a define so that the preprocessor will
- *    replace foo with npy_foo before the compilation pass. Make the
- *    function static to avoid poluting the module library.
+ *    ii) Avoid declaring any function here if at all possible. Declaring
+ *    functions is not portable: some platforms define some functions inline
+ *    with a non-standard identifier, for example, or may use another
+ *    identifier which changes the calling convention of the function. If you
+ *    really have to, ALWAYS declare it for the one platform you are dealing
+ *    with:
  *
- *    #ifdef foo
- *    #undef foo
- *    #endif
- *    #ifndef HAVE_FOO
- *    static double
- *    npy_foo(double x)
- *    {
- *        return x;
- *    }
- *    #define foo npy_foo
+ *    Not ok:
+ *        double exp(double a);
  *
- *    iii) Finally, even if foo is in the library, add a prototype. Just being
- *    in the library doesn't guarantee a prototype in math.h, and in any case
- *    you want to make sure the prototype is what you think it is. Count on it,
- *    whatever can go wrong will go wrong. Think defensively! The result:
- *
- *    #ifdef foo
- *    #undef foo
- *    #endif
- *    #ifndef HAVE_FOO
- *    static double
- *    npy_foo(double x)
- *    {
- *        return x;
- *    }
- *    #define foo npy_foo
- *    #else
- *    double foo(double x);
- *    #end
- *
- *    And there you have it.
- *
+ *    Ok:
+ *        #ifdef SYMBOL_DEFINED_WEIRD_PLATFORM
+ *        double exp(double);
+ *        #endif 
  */
 
 /*
@@ -82,8 +56,7 @@
 
 /* Original code by Konrad Hinsen.  */
 #ifndef HAVE_EXPM1
-static double
-npy_expm1(double x)
+double expm1(double x)
 {
     double u = exp(x);
     if (u == 1.0) {
@@ -94,14 +67,10 @@
         return (u-1.0) * x/log(u);
     }
 }
-#define expm1 npy_expm1
-#else
-double expm1(double x);
 #endif
 
 #ifndef HAVE_LOG1P
-static double
-npy_log1p(double x)
+double log1p(double x)
 {
     double u = 1. + x;
     if (u == 1.0) {
@@ -110,14 +79,10 @@
         return log(u) * x / (u - 1);
     }
 }
-#define log1p npy_log1p
-#else
-double log1p(double x);
 #endif
 
 #ifndef HAVE_HYPOT
-static double
-npy_hypot(double x, double y)
+double hypot(double x, double y)
 {
     double yx;
 
@@ -135,25 +100,17 @@
         return x*sqrt(1.+yx*yx);
     }
 }
-#define hypot npy_hypot
-#else
-double hypot(double x, double y);
 #endif
 
 #ifndef HAVE_ACOSH
-static double
-npy_acosh(double x)
+double acosh(double x)
 {
     return 2*log(sqrt((x+1.0)/2)+sqrt((x-1.0)/2));
 }
-#define acosh npy_acosh
-#else
-double acosh(double x);
 #endif
 
 #ifndef HAVE_ASINH
-static double
-npy_asinh(double xx)
+double asinh(double xx)
 {
     double x, d;
     int sign;
@@ -172,25 +129,22 @@
     }
     return sign*log1p(x*(1.0 + x/(d+1)));
 }
-#define asinh npy_asinh
-#else
-double asinh(double xx);
 #endif
 
 #ifndef HAVE_ATANH
-static double
-npy_atanh(double x)
+double atanh(double x)
 {
-    return 0.5*log1p(2.0*x/(1.0-x));
+    if (x > 0) {
+        return -0.5*log1p(-2.0*x/(1.0 + x));
+    }
+    else {
+        return 0.5*log1p(2.0*x/(1.0 - x));
+    }
 }
-#define atanh npy_atanh
-#else
-double atanh(double x);
 #endif
 
 #ifndef HAVE_RINT
-static double
-npy_rint(double x)
+double rint(double x)
 {
     double y, r;
 
@@ -209,46 +163,31 @@
     }
     return y;
 }
-#define rint npy_rint
-#else
-double rint(double x);
 #endif
 
 #ifndef HAVE_TRUNC
-static double
-npy_trunc(double x)
+double trunc(double x)
 {
     return x < 0 ? ceil(x) : floor(x);
 }
-#define trunc npy_trunc
-#else
-double trunc(double x);
 #endif
 
 #ifndef HAVE_EXP2
 #define LOG2 0.69314718055994530943
-static double
-npy_exp2(double x)
+double exp2(double x)
 {
     return exp(LOG2*x);
 }
-#define exp2 npy_exp2
 #undef LOG2
-#else
-double exp2(double x);
 #endif
 
 #ifndef HAVE_LOG2
 #define INVLOG2 1.4426950408889634074
-static double
-npy_log2(double x)
+double log2(double x)
 {
     return INVLOG2*log(x);
 }
-#define log2 npy_log2
 #undef INVLOG2
-#else
-double log2(double x);
 #endif
 
 /*
@@ -326,14 +265,10 @@
 #undef @kind@@c@
 #endif
 #ifndef HAVE_ at KIND@@C@
-static @type@
-npy_ at kind@@c@(@type@ x)
+ at type@ @kind@@c@(@type@ x)
 {
     return (@type@) @kind@((double)x);
 }
-#define @kind@@c@  npy_ at kind@@c@
-#else
- at type@ @kind@@c@(@type@ x);
 #endif
 
 /**end repeat1**/
@@ -346,14 +281,10 @@
 #undef @kind@@c@
 #endif
 #ifndef HAVE_ at KIND@@C@
-static @type@
-npy_ at kind@@c@(@type@ x, @type@ y)
+ at type@ @kind@@c@(@type@ x, @type@ y)
 {
     return (@type@) @kind@((double)x, (double) y);
 }
-#define @kind@@c@  npy_ at kind@@c@
-#else
- at type@ @kind@@c@(@type@ x, @type@ y);
 #endif
 /**end repeat1**/
 
@@ -361,17 +292,13 @@
 #undef modf at c@
 #endif
 #ifndef HAVE_MODF at C@
-static @type@
-npy_modf at c@(@type@ x, @type@ *iptr)
+ at type@ modf at c@(@type@ x, @type@ *iptr)
 {
     double niptr;
     double y = modf((double)x, &niptr);
     *iptr = (@type@) niptr;
     return (@type@) y;
 }
-#define modf at c@ npy_modf at c@
-#else
- at type@ modf at c@(@type@ x, @type@ *iptr);
 #endif
 
 /**end repeat**/
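
The rewritten atanh fallback above picks a branch by the sign of x so that the argument of log1p stays small and well conditioned. A standalone Python check of the same identity against the C library result (illustrative only; assumes math.atanh and math.log1p are available):

    import math

    def atanh_branches(x):
        # Same branch structure as the C fallback: both expressions equal
        # 0.5*log((1+x)/(1-x)), but each side avoids cancellation for its sign.
        if x > 0:
            return -0.5 * math.log1p(-2.0 * x / (1.0 + x))
        else:
            return 0.5 * math.log1p(2.0 * x / (1.0 - x))

    for x in (-0.9, -1e-8, 0.0, 1e-8, 0.5, 0.9):
        assert abs(atanh_branches(x) - math.atanh(x)) < 1e-12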

Modified: branches/dynamic_cpu_configuration/numpy/core/tests/test_umath.py
===================================================================
--- branches/dynamic_cpu_configuration/numpy/core/tests/test_umath.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/core/tests/test_umath.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -404,7 +404,7 @@
         if sys.version_info < (2,5,3):
             broken_cmath_asinh = True
 
-        points = [-2, 2j, 2, -2j, -1-1j, -1+1j, +1-1j, +1+1j]
+        points = [-1-1j, -1+1j, +1-1j, +1+1j]
         name_map = {'arcsin': 'asin', 'arccos': 'acos', 'arctan': 'atan',
                     'arcsinh': 'asinh', 'arccosh': 'acosh', 'arctanh': 'atanh'}
         atol = 4*np.finfo(np.complex).eps
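
The remaining test points sit away from the branch cuts, where numpy's complex inverse trig functions can be compared directly against cmath. A rough standalone sketch of that comparison (illustrative, with a much looser tolerance than the test itself uses):

    import cmath
    import numpy as np

    points = [-1-1j, -1+1j, +1-1j, +1+1j]
    name_map = {'arcsin': 'asin', 'arccos': 'acos', 'arctan': 'atan',
                'arcsinh': 'asinh', 'arccosh': 'acosh', 'arctanh': 'atanh'}
    for np_name, c_name in name_map.items():
        for p in points:
            expected = getattr(cmath, c_name)(p)
            got = complex(getattr(np, np_name)(p))
            assert abs(got - expected) < 1e-10, (np_name, p, got, expected)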

Modified: branches/dynamic_cpu_configuration/numpy/distutils/fcompiler/gnu.py
===================================================================
--- branches/dynamic_cpu_configuration/numpy/distutils/fcompiler/gnu.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/distutils/fcompiler/gnu.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -10,6 +10,7 @@
 
 compilers = ['GnuFCompiler', 'Gnu95FCompiler']
 
+TARGET_R = re.compile("Target: ([a-zA-Z0-9_\-]*)")
 class GnuFCompiler(FCompiler):
     compiler_type = 'gnu'
     compiler_aliases = ('g77',)
@@ -130,10 +131,10 @@
                 # if windows and not cygwin, libg2c lies in a different folder
                 if sys.platform == 'win32' and not d.startswith('/usr/lib'):
                     d = os.path.normpath(d)
-                    if not os.path.exists(os.path.join(d, 'libg2c.a')):
+                    if not os.path.exists(os.path.join(d, "lib%s.a" % self.g2c)):
                         d2 = os.path.abspath(os.path.join(d,
                                                           '../../../../lib'))
-                        if os.path.exists(os.path.join(d2, 'libg2c.a')):
+                        if os.path.exists(os.path.join(d2, "lib%s.a" % self.g2c)):
                             opt.append(d2)
                 opt.append(d)
         return opt
@@ -269,12 +270,44 @@
         flags = GnuFCompiler.get_flags_linker_so(self)
         return self._add_arches_for_universal_build(flags)
 
+    def get_library_dirs(self):
+        opt = GnuFCompiler.get_library_dirs(self)
+        if sys.platform == 'win32':
+            c_compiler = self.c_compiler
+            if c_compiler and c_compiler.compiler_type == "msvc":
+                target = self.get_target()
+                if target:
+                    d = os.path.normpath(self.get_libgcc_dir())
+                    root = os.path.join(d, os.pardir, os.pardir, os.pardir, os.pardir)
+                    mingwdir = os.path.normpath(os.path.join(root, target, "lib"))
+                    full = os.path.join(mingwdir, "libmingwex.a")
+                    if os.path.exists(full):
+                        opt.append(mingwdir)
+        return opt
+
     def get_libraries(self):
         opt = GnuFCompiler.get_libraries(self)
         if sys.platform == 'darwin':
             opt.remove('cc_dynamic')
+        if sys.platform == 'win32':
+            c_compiler = self.c_compiler
+            if c_compiler and c_compiler.compiler_type == "msvc":
+                if "gcc" in opt:
+                    i = opt.index("gcc")
+                    opt.insert(i+1, "mingwex")
+                    opt.insert(i+1, "mingw32")
         return opt
 
+    def get_target(self):
+        status, output = exec_command(self.compiler_f77 +
+                                      ['-v'],
+                                      use_tee=0)
+        if not status:
+            m = TARGET_R.search(output)
+            if m:
+                return m.group(1)
+        return ""
+
 if __name__ == '__main__':
     from distutils import log
     log.set_verbosity(2)
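
get_target() above extracts the target triplet from the compiler's -v banner with TARGET_R. A quick standalone illustration using a made-up sample of that output (the real string comes from exec_command):

    import re

    TARGET_R = re.compile("Target: ([a-zA-Z0-9_\-]*)")

    sample_output = ("Using built-in specs.\n"
                     "Target: i686-pc-mingw32\n"
                     "Thread model: win32\n")

    m = TARGET_R.search(sample_output)
    if m:
        print m.group(1)            # i686-pc-mingw32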

Modified: branches/dynamic_cpu_configuration/numpy/f2py/crackfortran.py
===================================================================
--- branches/dynamic_cpu_configuration/numpy/f2py/crackfortran.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/f2py/crackfortran.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -2446,9 +2446,9 @@
     global skipfuncs, onlyfuncs
     setmesstext(block)
     ret=''
-    if type(block) is type([]):
+    if isinstance(block, list):
         for g in block:
-            if g['block'] in ['function','subroutine']:
+            if g and g['block'] in ['function','subroutine']:
                 if g['name'] in skipfuncs:
                     continue
                 if onlyfuncs and g['name'] not in onlyfuncs:
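
The switch from `type(block) is type([])` to isinstance() also accepts list subclasses, and the extra `g and` guard skips empty entries. A two-line illustration of the type-check difference (BlockList is a made-up name):

    class BlockList(list):
        pass

    blocks = BlockList([{'block': 'function', 'name': 'f'}])
    print type(blocks) is type([])      # False -- the old check misses subclasses
    print isinstance(blocks, list)      # True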

Modified: branches/dynamic_cpu_configuration/numpy/lib/format.py
===================================================================
--- branches/dynamic_cpu_configuration/numpy/lib/format.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/lib/format.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -381,7 +381,7 @@
 
 
 def open_memmap(filename, mode='r+', dtype=None, shape=None,
-    fortran_order=False, version=(1,0)):
+                fortran_order=False, version=(1,0)):
     """
     Open a .npy file as a memory-mapped array.
 
@@ -390,7 +390,7 @@
     Parameters
     ----------
     filename : str
-        The name of the file on disk. This may not be a filelike object.
+        The name of the file on disk. This may not be a file-like object.
     mode : str, optional
         The mode to open the file with. In addition to the standard file modes,
         'c' is also accepted to mean "copy on write". See `numpy.memmap` for
@@ -425,6 +425,10 @@
     numpy.memmap
 
     """
+    if not isinstance(filename, basestring):
+        raise ValueError("Filename must be a string.  Memmap cannot use" \
+                         " existing file handles.")
+
     if 'w' in mode:
         # We are creating the file, not reading it.
         # Check if we ought to create the file.
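
With the new guard, open_memmap fails early when handed anything other than a filename, since numpy.memmap needs a real path on disk. A rough sketch of both sides of that check (paths, dtype and shape are placeholders):

    import numpy as np
    from numpy.lib import format

    # A string filename works as before.
    mm = format.open_memmap('/tmp/example.npy', mode='w+',
                            dtype=np.float64, shape=(3, 4))
    mm[0, 0] = 1.0
    del mm

    # An already-open file handle is now rejected up front.
    try:
        format.open_memmap(open('/tmp/example.npy', 'rb'))
    except ValueError, e:
        print e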

Modified: branches/dynamic_cpu_configuration/numpy/lib/io.py
===================================================================
--- branches/dynamic_cpu_configuration/numpy/lib/io.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/lib/io.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -79,7 +79,7 @@
         else:
             raise KeyError, "%s is not a file in the archive" % key
 
-def load(file, memmap=False):
+def load(file, mmap_mode=None):
     """
     Load a pickled, ``.npy``, or ``.npz`` binary file.
 
@@ -87,10 +87,15 @@
     ----------
     file : file-like object or string
         The file to read.  It must support ``seek()`` and ``read()`` methods.
-    memmap : bool
-        If True, then memory-map the ``.npy`` file (or unzip the ``.npz`` file
-        into a temporary directory and memory-map each component).  This has
-        no effect for a pickled file.
+    mmap_mode : {None, 'r+', 'r', 'w+', 'c'}, optional
+        If not None, then memory-map the file, using the given mode
+        (see `numpy.memmap`).  The mode has no effect for pickled or
+        zipped files.
+        A memory-mapped array is stored on disk, and not directly loaded
+        into memory.  However, it can be accessed and sliced like any
+        ndarray.  Memory mapping is especially useful for accessing
+        small fragments of large files without reading the entire file
+        into memory.
 
     Returns
     -------
@@ -104,28 +109,35 @@
 
     Notes
     -----
-    - If file contains pickle data, then whatever is stored in the
+    - If the file contains pickle data, then whatever is stored in the
       pickle is returned.
     - If the file is a ``.npy`` file, then an array is returned.
     - If the file is a ``.npz`` file, then a dictionary-like object is
-      returned, containing {filename: array} key-value pairs, one for
-      every file in the archive.
+      returned, containing ``{filename: array}`` key-value pairs, one for
+      each file in the archive.
 
     Examples
     --------
-    >>> np.save('/tmp/123', np.array([1, 2, 3])
+    Store data to disk, and load it again:
+
+    >>> np.save('/tmp/123', np.array([[1, 2, 3], [4, 5, 6]]))
     >>> np.load('/tmp/123.npy')
-    array([1, 2, 3])
+    array([[1, 2, 3],
+           [4, 5, 6]])
 
+    Mem-map the stored array, and then access the second row
+    directly from disk:
+
+    >>> X = np.load('/tmp/123.npy', mmap_mode='r')
+    >>> X[1, :]
+    memmap([4, 5, 6])
+
     """
     if isinstance(file, basestring):
         fid = _file(file,"rb")
     else:
         fid = file
 
-    if memmap:
-        raise NotImplementedError
-
     # Code to distinguish from NumPy binary files and pickles.
     _ZIP_PREFIX = 'PK\x03\x04'
     N = len(format.MAGIC_PREFIX)
@@ -134,7 +146,10 @@
     if magic.startswith(_ZIP_PREFIX):  # zip-file (assume .npz)
         return NpzFile(fid)
     elif magic == format.MAGIC_PREFIX: # .npy file
-        return format.read_array(fid)
+        if mmap_mode:
+            return format.open_memmap(file, mode=mmap_mode)
+        else:
+            return format.read_array(fid)
     else:  # Try a pickle
         try:
             return _cload(fid)
@@ -264,8 +279,8 @@
     Parameters
     ----------
     fname : file or string
-        File or filename to read.  If the filename extension is ``.gz``,
-        the file is first decompressed.
+        File or filename to read.  If the filename extension is ``.gz`` or
+        ``.bz2``, the file is first decompressed.
     dtype : data-type
         Data type of the resulting array.  If this is a record data-type,
         the resulting array will be 1-dimensional, and each row will be
@@ -331,9 +346,12 @@
         if fname.endswith('.gz'):
             import gzip
             fh = gzip.open(fname)
+        elif fname.endswith('.bz2'):
+            import bz2
+            fh = bz2.BZ2File(fname)
         else:
             fh = file(fname)
-    elif hasattr(fname, 'seek'):
+    elif hasattr(fname, 'readline'):
         fh = fname
     else:
         raise ValueError('fname must be a string or file handle')
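
Two of the changes above are easy to exercise together: np.load grows an mmap_mode argument, and loadtxt now decompresses .bz2 files the same way it handles .gz. A rough usage sketch (temporary paths are placeholders):

    import bz2
    import numpy as np

    # loadtxt: transparent bz2 decompression keyed off the extension.
    f = bz2.BZ2File('/tmp/data.txt.bz2', 'w')
    f.write('1 2\n3 4\n')
    f.close()
    print np.loadtxt('/tmp/data.txt.bz2')

    # load: memory-map a saved .npy file instead of reading it into memory.
    np.save('/tmp/data.npy', np.arange(10))
    x = np.load('/tmp/data.npy', mmap_mode='r')
    print x[3:6]                    # [3 4 5]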

Modified: branches/dynamic_cpu_configuration/numpy/lib/polynomial.py
===================================================================
--- branches/dynamic_cpu_configuration/numpy/lib/polynomial.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/lib/polynomial.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -15,36 +15,13 @@
 from numpy.lib.twodim_base import diag, vander
 from numpy.lib.shape_base import hstack, atleast_1d
 from numpy.lib.function_base import trim_zeros, sort_complex
-eigvals = None
-lstsq = None
+from numpy.linalg import eigvals, lstsq
 
 class RankWarning(UserWarning):
     """Issued by polyfit when Vandermonde matrix is rank deficient.
     """
     pass
 
-def get_linalg_funcs():
-    "Look for linear algebra functions in numpy"
-    global eigvals, lstsq
-    from numpy.dual import eigvals, lstsq
-    return
-
-def _eigvals(arg):
-    "Return the eigenvalues of the argument"
-    try:
-        return eigvals(arg)
-    except TypeError:
-        get_linalg_funcs()
-        return eigvals(arg)
-
-def _lstsq(X, y, rcond):
-    "Do least squares on the arguments"
-    try:
-        return lstsq(X, y, rcond)
-    except TypeError:
-        get_linalg_funcs()
-        return lstsq(X, y, rcond)
-
 def poly(seq_of_zeros):
     """
     Return polynomial coefficients given a sequence of roots.
@@ -94,7 +71,7 @@
     seq_of_zeros = atleast_1d(seq_of_zeros)
     sh = seq_of_zeros.shape
     if len(sh) == 2 and sh[0] == sh[1]:
-        seq_of_zeros = _eigvals(seq_of_zeros)
+        seq_of_zeros = eigvals(seq_of_zeros)
     elif len(sh) ==1:
         pass
     else:
@@ -177,7 +154,7 @@
         # build companion matrix and find its eigenvalues (the roots)
         A = diag(NX.ones((N-2,), p.dtype), -1)
         A[0, :] = -p[1:] / p[0]
-        roots = _eigvals(A)
+        roots = eigvals(A)
     else:
         roots = NX.array([])
 
@@ -500,7 +477,7 @@
 
     # solve least squares equation for powers of x
     v = vander(x, order)
-    c, resids, rank, s = _lstsq(v, y, rcond)
+    c, resids, rank, s = lstsq(v, y, rcond)
 
     # warn on rank reduction, which indicates an ill conditioned matrix
     if rank != order and not full:
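
With eigvals and lstsq imported directly from numpy.linalg, the lazy _eigvals/_lstsq wrappers go away and the callers (poly, roots, polyfit) behave as before. For instance, the companion-matrix path shown above is what drives the usual poly/roots round trip (illustrative values):

    import numpy as np

    p = [1.0, -6.0, 11.0, -6.0]      # (x - 1)(x - 2)(x - 3)
    r = np.roots(p)                  # eigenvalues of the companion matrix
    print np.sort(r)                 # approximately [ 1.  2.  3.]
    print np.poly(r)                 # approximately [ 1. -6. 11. -6.]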

Modified: branches/dynamic_cpu_configuration/numpy/lib/tests/test_io.py
===================================================================
--- branches/dynamic_cpu_configuration/numpy/lib/tests/test_io.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/lib/tests/test_io.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -2,55 +2,90 @@
 import numpy as np
 import StringIO
 
+from tempfile import NamedTemporaryFile
 
 class RoundtripTest:
+    def roundtrip(self, save_func, *args, **kwargs):
+        """
+        save_func : callable
+            Function used to save arrays to file.
+        file_on_disk : bool
+            If true, store the file on disk, instead of in a
+            string buffer.
+        save_kwds : dict
+            Parameters passed to `save_func`.
+        load_kwds : dict
+            Parameters passed to `numpy.load`.
+        args : tuple of arrays
+            Arrays stored to file.
+
+        """
+        save_kwds = kwargs.get('save_kwds', {})
+        load_kwds = kwargs.get('load_kwds', {})
+        file_on_disk = kwargs.get('file_on_disk', False)
+
+        if file_on_disk:
+            target_file = NamedTemporaryFile()
+            load_file = target_file.name
+        else:
+            target_file = StringIO.StringIO()
+            load_file = target_file
+
+        arr = args
+
+        save_func(target_file, *arr, **save_kwds)
+        target_file.flush()
+        target_file.seek(0)
+
+        arr_reloaded = np.load(load_file, **load_kwds)
+
+        self.arr = arr
+        self.arr_reloaded = arr_reloaded
+
     def test_array(self):
-        a = np.array( [[1,2],[3,4]], float)
-        self.do(a)
+        a = np.array([[1, 2], [3, 4]], float)
+        self.roundtrip(a)
 
-        a = np.array( [[1,2],[3,4]], int)
-        self.do(a)
+        a = np.array([[1, 2], [3, 4]], int)
+        self.roundtrip(a)
 
-        a = np.array( [[1+5j,2+6j],[3+7j,4+8j]], dtype=np.csingle)
-        self.do(a)
+        a = np.array([[1 + 5j, 2 + 6j], [3 + 7j, 4 + 8j]], dtype=np.csingle)
+        self.roundtrip(a)
 
-        a = np.array( [[1+5j,2+6j],[3+7j,4+8j]], dtype=np.cdouble)
-        self.do(a)
+        a = np.array([[1 + 5j, 2 + 6j], [3 + 7j, 4 + 8j]], dtype=np.cdouble)
+        self.roundtrip(a)
 
     def test_1D(self):
-        a = np.array([1,2,3,4], int)
-        self.do(a)
+        a = np.array([1, 2, 3, 4], int)
+        self.roundtrip(a)
 
+    def test_mmap(self):
+        a = np.array([[1, 2.5], [4, 7.3]])
+        self.roundtrip(a, file_on_disk=True, load_kwds={'mmap_mode': 'r'})
+
     def test_record(self):
         a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')])
-        self.do(a)
+        self.roundtrip(a)
 
 class TestSaveLoad(RoundtripTest, TestCase):
-    def do(self, a):
-        c = StringIO.StringIO()
-        np.save(c, a)
-        c.seek(0)
-        a_reloaded = np.load(c)
-        assert_equal(a, a_reloaded)
+    def roundtrip(self, *args, **kwargs):
+        RoundtripTest.roundtrip(self, np.save, *args, **kwargs)
+        assert_equal(self.arr[0], self.arr_reloaded)
 
-
 class TestSavezLoad(RoundtripTest, TestCase):
-    def do(self, *arrays):
-        c = StringIO.StringIO()
-        np.savez(c, *arrays)
-        c.seek(0)
-        l = np.load(c)
-        for n, a in enumerate(arrays):
-            assert_equal(a, l['arr_%d' % n])
+    def roundtrip(self, *args, **kwargs):
+        RoundtripTest.roundtrip(self, np.savez, *args, **kwargs)
+        for n, arr in enumerate(self.arr):
+            assert_equal(arr, self.arr_reloaded['arr_%d' % n])
 
     def test_multiple_arrays(self):
-        a = np.array( [[1,2],[3,4]], float)
-        b = np.array( [[1+2j,2+7j],[3-6j,4+12j]], complex)
-        self.do(a,b)
+        a = np.array([[1, 2], [3, 4]], float)
+        b = np.array([[1 + 2j, 2 + 7j], [3 - 6j, 4 + 12j]], complex)
+        self.roundtrip(a,b)
 
     def test_named_arrays(self):
-        a = np.array( [[1,2],[3,4]], float)
-        b = np.array( [[1+2j,2+7j],[3-6j,4+12j]], complex)
+        a = np.array([[1, 2], [3, 4]], float)
+        b = np.array([[1 + 2j, 2 + 7j], [3 - 6j, 4 + 12j]], complex)
         c = StringIO.StringIO()
         np.savez(c, file_a=a, file_b=b)
         c.seek(0)
@@ -61,7 +96,7 @@
 
 class TestSaveTxt(TestCase):
     def test_array(self):
-        a =np.array( [[1,2],[3,4]], float)
+        a =np.array([[1, 2], [3, 4]], float)
         c = StringIO.StringIO()
         np.savetxt(c, a)
         c.seek(0)
@@ -69,14 +104,14 @@
                ['1.000000000000000000e+00 2.000000000000000000e+00\n',
                 '3.000000000000000000e+00 4.000000000000000000e+00\n'])
 
-        a =np.array( [[1,2],[3,4]], int)
+        a =np.array([[1, 2], [3, 4]], int)
         c = StringIO.StringIO()
         np.savetxt(c, a, fmt='%d')
         c.seek(0)
         assert_equal(c.readlines(), ['1 2\n', '3 4\n'])
 
     def test_1D(self):
-        a = np.array([1,2,3,4], int)
+        a = np.array([1, 2, 3, 4], int)
         c = StringIO.StringIO()
         np.savetxt(c, a, fmt='%d')
         c.seek(0)
@@ -146,12 +181,12 @@
 
         c.seek(0)
         x = np.loadtxt(c, dtype=int)
-        a = np.array([[1,2],[3,4]], int)
+        a = np.array([[1, 2], [3, 4]], int)
         assert_array_equal(x, a)
 
         c.seek(0)
         x = np.loadtxt(c, dtype=float)
-        a = np.array([[1,2],[3,4]], float)
+        a = np.array([[1, 2], [3, 4]], float)
         assert_array_equal(x, a)
 
     def test_1D(self):
@@ -159,14 +194,14 @@
         c.write('1\n2\n3\n4\n')
         c.seek(0)
         x = np.loadtxt(c, dtype=int)
-        a = np.array([1,2,3,4], int)
+        a = np.array([1, 2, 3, 4], int)
         assert_array_equal(x, a)
 
         c = StringIO.StringIO()
         c.write('1,2,3,4\n')
         c.seek(0)
         x = np.loadtxt(c, dtype=int, delimiter=',')
-        a = np.array([1,2,3,4], int)
+        a = np.array([1, 2, 3, 4], int)
         assert_array_equal(x, a)
 
     def test_missing(self):
@@ -175,7 +210,7 @@
         c.seek(0)
         x = np.loadtxt(c, dtype=int, delimiter=',', \
             converters={3:lambda s: int(s or -999)})
-        a = np.array([1,2,3,-999,5], int)
+        a = np.array([1, 2, 3, -999, 5], int)
         assert_array_equal(x, a)
 
     def test_converters_with_usecols(self):
@@ -184,8 +219,8 @@
         c.seek(0)
         x = np.loadtxt(c, dtype=int, delimiter=',', \
             converters={3:lambda s: int(s or -999)}, \
-            usecols=(1, 3, ))
-        a = np.array([[2,  -999],[7, 9]], int)
+            usecols=(1, 3,))
+        a = np.array([[2, -999], [7, 9]], int)
         assert_array_equal(x, a)
 
     def test_comments(self):
@@ -194,7 +229,7 @@
         c.seek(0)
         x = np.loadtxt(c, dtype=int, delimiter=',', \
             comments='#')
-        a = np.array([1,2,3,5], int)
+        a = np.array([1, 2, 3, 5], int)
         assert_array_equal(x, a)
 
     def test_skiprows(self):
@@ -203,7 +238,7 @@
         c.seek(0)
         x = np.loadtxt(c, dtype=int, delimiter=',', \
             skiprows=1)
-        a = np.array([1,2,3,5], int)
+        a = np.array([1, 2, 3, 5], int)
         assert_array_equal(x, a)
 
         c = StringIO.StringIO()
@@ -211,28 +246,28 @@
         c.seek(0)
         x = np.loadtxt(c, dtype=int, delimiter=',', \
             skiprows=1)
-        a = np.array([1,2,3,5], int)
+        a = np.array([1, 2, 3, 5], int)
         assert_array_equal(x, a)
 
     def test_usecols(self):
-        a =np.array( [[1,2],[3,4]], float)
+        a = np.array([[1, 2], [3, 4]], float)
         c = StringIO.StringIO()
         np.savetxt(c, a)
         c.seek(0)
         x = np.loadtxt(c, dtype=float, usecols=(1,))
         assert_array_equal(x, a[:,1])
 
-        a =np.array( [[1,2,3],[3,4,5]], float)
+        a =np.array([[1, 2, 3], [3, 4, 5]], float)
         c = StringIO.StringIO()
         np.savetxt(c, a)
         c.seek(0)
-        x = np.loadtxt(c, dtype=float, usecols=(1,2))
-        assert_array_equal(x, a[:,1:])
+        x = np.loadtxt(c, dtype=float, usecols=(1, 2))
+        assert_array_equal(x, a[:, 1:])
 
         # Testing with arrays instead of tuples.
         c.seek(0)
-        x = np.loadtxt(c, dtype=float, usecols=np.array([1,2]))
-        assert_array_equal(x, a[:,1:])
+        x = np.loadtxt(c, dtype=float, usecols=np.array([1, 2]))
+        assert_array_equal(x, a[:, 1:])
 
         # Checking with dtypes defined converters.
         data = '''JOE 70.1 25.3
@@ -241,9 +276,9 @@
         c = StringIO.StringIO(data)
         names = ['stid', 'temp']
         dtypes = ['S4', 'f8']
-        arr = np.loadtxt(c, usecols=(0,2),dtype=zip(names,dtypes))
-        assert_equal(arr['stid'],  ["JOE",  "BOB"])
-        assert_equal(arr['temp'],  [25.3,  27.9])
+        arr = np.loadtxt(c, usecols=(0, 2), dtype=zip(names, dtypes))
+        assert_equal(arr['stid'], ["JOE",  "BOB"])
+        assert_equal(arr['temp'], [25.3,  27.9])
 
     def test_fancy_dtype(self):
         c = StringIO.StringIO()
@@ -251,7 +286,7 @@
         c.seek(0)
         dt = np.dtype([('x', int), ('y', [('t', int), ('s', float)])])
         x = np.loadtxt(c, dtype=dt, delimiter=',')
-        a = np.array([(1,(2,3.0)),(4,(5,6.0))], dt)
+        a = np.array([(1, (2, 3.0)), (4, (5, 6.0))], dt)
         assert_array_equal(x, a)
 
     def test_empty_file(self):
@@ -262,11 +297,13 @@
         c = StringIO.StringIO()
         c.writelines(['1 21\n', '3 42\n'])
         c.seek(0)
-        data = np.loadtxt(c, usecols=(1,), converters={0: lambda s: int(s, 16)})
+        data = np.loadtxt(c, usecols=(1,),
+                          converters={0: lambda s: int(s, 16)})
         assert_array_equal(data, [21, 42])
 
         c.seek(0)
-        data = np.loadtxt(c, usecols=(1,), converters={1: lambda s: int(s, 16)})
+        data = np.loadtxt(c, usecols=(1,),
+                          converters={1: lambda s: int(s, 16)})
         assert_array_equal(data, [33, 66])
 
 class Testfromregex(TestCase):
@@ -277,7 +314,8 @@
 
         dt = [('num', np.float64), ('val', 'S3')]
         x = np.fromregex(c, r"([0-9.]+)\s+(...)", dt)
-        a = np.array([(1.312, 'foo'), (1.534, 'bar'), (4.444, 'qux')], dtype=dt)
+        a = np.array([(1.312, 'foo'), (1.534, 'bar'), (4.444, 'qux')],
+                     dtype=dt)
         assert_array_equal(x, a)
 
     def test_record_2(self):
@@ -288,7 +326,8 @@
 
         dt = [('num', np.int32), ('val', 'S3')]
         x = np.fromregex(c, r"(\d+)\s+(...)", dt)
-        a = np.array([(1312, 'foo'), (1534, 'bar'), (4444, 'qux')], dtype=dt)
+        a = np.array([(1312, 'foo'), (1534, 'bar'), (4444, 'qux')],
+                     dtype=dt)
         assert_array_equal(x, a)
 
     def test_record_3(self):
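
The refactored RoundtripTest grows a file_on_disk option because a memory-mapped reload needs a real file; a StringIO buffer cannot be memory-mapped. Outside the test class, the same round trip looks roughly like this (illustrative sketch):

    import numpy as np
    from tempfile import NamedTemporaryFile

    a = np.array([[1, 2.5], [4, 7.3]])

    f = NamedTemporaryFile()
    np.save(f, a)
    f.flush()

    b = np.load(f.name, mmap_mode='r')   # b is a numpy.memmap view of the file
    print np.all(a == b)                 # True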

Modified: branches/dynamic_cpu_configuration/numpy/ma/core.py
===================================================================
--- branches/dynamic_cpu_configuration/numpy/ma/core.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/ma/core.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -19,7 +19,7 @@
 __author__ = "Pierre GF Gerard-Marchant"
 __docformat__ = "restructuredtext en"
 
-__all__ = ['MAError', 'MaskType', 'MaskedArray',
+__all__ = ['MAError', 'MaskError', 'MaskType', 'MaskedArray',
            'bool_',
            'abs', 'absolute', 'add', 'all', 'allclose', 'allequal', 'alltrue',
            'amax', 'amin', 'anom', 'anomalies', 'any', 'arange',
@@ -28,13 +28,14 @@
            'array', 'asarray','asanyarray',
            'bitwise_and', 'bitwise_or', 'bitwise_xor',
            'ceil', 'choose', 'clip', 'common_fill_value', 'compress',
-           'compressed', 'concatenate', 'conjugate', 'cos', 'cosh', 'count',
-           'default_fill_value', 'diagonal', 'divide', 'dump', 'dumps',
-           'empty', 'empty_like', 'equal', 'exp',
-           'fabs', 'fmod', 'filled', 'floor', 'floor_divide','fix_invalid',
-           'frombuffer', 'fromfunction',
+           'compressed', 'concatenate', 'conjugate', 'copy', 'cos', 'cosh',
+           'count', 'cumprod', 'cumsum',
+           'default_fill_value', 'diag', 'diagonal', 'divide', 'dump', 'dumps',
+           'empty', 'empty_like', 'equal', 'exp', 'expand_dims',
+           'fabs', 'flatten_mask', 'fmod', 'filled', 'floor', 'floor_divide',
+           'fix_invalid', 'frombuffer', 'fromfunction',
            'getdata','getmask', 'getmaskarray', 'greater', 'greater_equal',
-           'hypot',
+           'harden_mask', 'hypot',
            'identity', 'ids', 'indices', 'inner', 'innerproduct',
            'isMA', 'isMaskedArray', 'is_mask', 'is_masked', 'isarray',
            'left_shift', 'less', 'less_equal', 'load', 'loads', 'log', 'log10',
@@ -49,12 +50,13 @@
            'mod', 'multiply',
            'negative', 'nomask', 'nonzero', 'not_equal',
            'ones', 'outer', 'outerproduct',
-           'power', 'product', 'ptp', 'put', 'putmask',
+           'power', 'prod', 'product', 'ptp', 'put', 'putmask',
            'rank', 'ravel', 'remainder', 'repeat', 'reshape', 'resize',
-           'right_shift', 'round_',
-           'set_fill_value', 'shape', 'sin', 'sinh', 'size', 'sometrue', 'sort',
-           'sqrt', 'std', 'subtract', 'sum', 'swapaxes',
-           'take', 'tan', 'tanh', 'transpose', 'true_divide',
+           'right_shift', 'round_', 'round',
+           'set_fill_value', 'shape', 'sin', 'sinh', 'size', 'sometrue',
+           'sort', 'soften_mask', 'sqrt', 'squeeze', 'std', 'subtract', 'sum', 
+           'swapaxes',
+           'take', 'tan', 'tanh', 'trace', 'transpose', 'true_divide',
            'var', 'where',
            'zeros']
 
@@ -95,12 +97,31 @@
     """
     return newdoc % (initialdoc, note)
 
+def get_object_signature(obj):
+    """
+    Get the signature from obj
+    """
+    import inspect
+    try:
+        sig = inspect.formatargspec(*inspect.getargspec(obj))
+    except TypeError, errmsg:
+        msg = "Unable to retrieve the signature of %s '%s'\n"\
+              "(Initial error message: %s)"
+#        warnings.warn(msg % (type(obj),
+#                             getattr(obj, '__name__', '???'),
+#                             errmsg))
+        sig = ''
+    return sig
+
 #####--------------------------------------------------------------------------
 #---- --- Exceptions ---
 #####--------------------------------------------------------------------------
 class MAError(Exception):
     "Class for MA related errors."
     pass
+class MaskError(MAError):
+    "Class for mask related errors."
+    pass
 
 
 #####--------------------------------------------------------------------------
@@ -514,17 +535,20 @@
             # ... but np.putmask looks more efficient, despite the copy.
             np.putmask(d1, dm, self.fill)
         # Take care of the masked singleton first ...
-        if not m.ndim and m:
+        if (not m.ndim) and m:
             return masked
-        # Get the result class .......................
-        if isinstance(a, MaskedArray):
-            subtype = type(a)
+        elif m is nomask:
+            result = self.f(d1, *args, **kwargs)
         else:
-            subtype = MaskedArray
-        # Get the result  as a view of the subtype ...
-        result = self.f(d1, *args, **kwargs).view(subtype)
-        # Fix the mask if we don't have a scalar
-        if result.ndim > 0:
+            result = np.where(m, d1, self.f(d1, *args, **kwargs))
+        # If result is not a scalar
+        if result.ndim:
+            # Get the result subclass:
+            if isinstance(a, MaskedArray):
+                subtype = type(a)
+            else:
+                subtype = MaskedArray
+            result = result.view(subtype)
             result._mask = m
             result._update_from(a)
         return result
@@ -563,19 +587,45 @@
     def __call__ (self, a, b, *args, **kwargs):
         "Execute the call behavior."
         m = mask_or(getmask(a), getmask(b))
-        (d1, d2) = (get_data(a), get_data(b))
-        result = self.f(d1, d2, *args, **kwargs).view(get_masked_subclass(a, b))
-        if len(result.shape):
-            if m is not nomask:
-                result._mask = make_mask_none(result.shape)
-                result._mask.flat = m
+        (da, db) = (getdata(a), getdata(b))
+        # Easy case: there's no mask...
+        if m is nomask:
+            result = self.f(da, db, *args, **kwargs)
+        # There are some masked elements: run only on the unmasked
+        else:
+            result = np.where(m, da, self.f(da, db, *args, **kwargs))
+        # Transforms to a (subclass of) MaskedArray if we don't have a scalar
+        if result.shape:
+            result = result.view(get_masked_subclass(a, b))
+            result._mask = make_mask_none(result.shape)
+            result._mask.flat = m
             if isinstance(a, MaskedArray):
                 result._update_from(a)
             if isinstance(b, MaskedArray):
                 result._update_from(b)
+        # ... or return masked if we have a scalar and the common mask is True
         elif m:
             return masked
         return result
+#        
+#        result = self.f(d1, d2, *args, **kwargs).view(get_masked_subclass(a, b))
+#        if len(result.shape):
+#            if m is not nomask:
+#                result._mask = make_mask_none(result.shape)
+#                result._mask.flat = m
+#                #!!!!!
+#                # Force m to be at least 1D
+#                m.shape = m.shape or (1,)
+#                print "Resetting data"
+#                result.data[m].flat = d1.flat
+#                #!!!!!
+#            if isinstance(a, MaskedArray):
+#                result._update_from(a)
+#            if isinstance(b, MaskedArray):
+#                result._update_from(b)
+#        elif m:
+#            return masked
+#        return result
 
     def reduce(self, target, axis=0, dtype=None):
         """Reduce `target` along the given `axis`."""
@@ -618,11 +668,13 @@
             m = umath.logical_or.outer(ma, mb)
         if (not m.ndim) and m:
             return masked
-        rcls = get_masked_subclass(a, b)
-        # We could fill the arguments first, butis it useful ?
-        # d = self.f.outer(filled(a, self.fillx), filled(b, self.filly)).view(rcls)
-        d = self.f.outer(getdata(a), getdata(b)).view(rcls)
-        if d.ndim > 0:
+        (da, db) = (getdata(a), getdata(b))
+        if m is nomask:
+            d = self.f.outer(da, db)
+        else:
+            d = np.where(m, da, self.f.outer(da, db))
+        if d.shape:
+            d = d.view(get_masked_subclass(a, b))
             d._mask = m
         return d
 
@@ -634,7 +686,7 @@
         if isinstance(target, MaskedArray):
             tclass = type(target)
         else:
-            tclass = masked_array
+            tclass = MaskedArray
         t = filled(target, self.filly)
         return self.f.accumulate(t, axis).view(tclass)
 
@@ -643,7 +695,8 @@
 
 #..............................................................................
 class _DomainedBinaryOperation:
-    """Define binary operations that have a domain, like divide.
+    """
+    Define binary operations that have a domain, like divide.
 
     They have no reduce, outer or accumulate.
 
@@ -668,25 +721,29 @@
         ufunc_domain[dbfunc] = domain
         ufunc_fills[dbfunc] = (fillx, filly)
 
-    def __call__(self, a, b):
+    def __call__(self, a, b, *args, **kwargs):
         "Execute the call behavior."
         ma = getmask(a)
         mb = getmask(b)
-        d1 = getdata(a)
-        d2 = get_data(b)
-        t = narray(self.domain(d1, d2), copy=False)
+        da = getdata(a)
+        db = getdata(b)
+        t = narray(self.domain(da, db), copy=False)
         if t.any(None):
             mb = mask_or(mb, t)
             # The following line controls the domain filling
-            if t.size == d2.size:
-                d2 = np.where(t, self.filly, d2)
+            if t.size == db.size:
+                db = np.where(t, self.filly, db)
             else:
-                d2 = np.where(np.resize(t, d2.shape), self.filly, d2)
+                db = np.where(np.resize(t, db.shape), self.filly, db)
         m = mask_or(ma, mb)
         if (not m.ndim) and m:
             return masked
-        result =  self.f(d1, d2).view(get_masked_subclass(a, b))
-        if result.ndim > 0:
+        elif (m is nomask):
+            result = self.f(da, db, *args, **kwargs)
+        else:
+            result = np.where(m, da, self.f(da, db, *args, **kwargs))
+        if result.shape:
+            result = result.view(get_masked_subclass(a, b))
             result._mask = m
             if isinstance(a, MaskedArray):
                 result._update_from(a)
@@ -780,22 +837,29 @@
     Each field is set to a bool.
 
     """
+    def _make_descr(datatype):
+        "Private function allowing recursion."
+        # Do we have some name fields ?
+        if datatype.names:
+            descr = []
+            for name in datatype.names:
+                field = datatype.fields[name]
+                if len(field) == 3:
+                    # Prepend the title to the name
+                    name = (field[-1], name)
+                descr.append((name, _make_descr(field[0])))
+            return descr
+        # Is this some kind of composite a la (np.float,2)
+        elif datatype.subdtype:
+            mdescr = list(datatype.subdtype)
+            mdescr[0] = np.dtype(bool)
+            return tuple(mdescr)
+        else:
+            return np.bool
     # Make sure we do have a dtype
     if not isinstance(ndtype, np.dtype):
         ndtype = np.dtype(ndtype)
-    # Do we have some name fields ?
-    if ndtype.names:
-        mdescr = [list(_) for _ in ndtype.descr]
-        for m in mdescr:
-            m[1] = '|b1'
-        return np.dtype([tuple(_) for _ in mdescr])
-    # Is this some kind of composite a la (np.float,2)
-    elif ndtype.subdtype:
-        mdescr = list(ndtype.subdtype)
-        mdescr[0] = np.dtype(bool)
-        return np.dtype(tuple(mdescr))
-    else:
-        return MaskType
+    return np.dtype(_make_descr(ndtype))
 
 def get_mask(a):
     """Return the mask of a, if any, or nomask.
@@ -944,6 +1008,61 @@
     return make_mask(umath.logical_or(m1, m2), copy=copy, shrink=shrink)
 
 
+def flatten_mask(mask):
+    """
+    Returns a completely flattened version of the mask, where nested fields
+    are collapsed.
+    
+    Parameters
+    ----------
+    mask : array_like
+        Array of booleans
+
+    Returns
+    -------
+    flattened_mask : ndarray
+        Boolean array.
+
+    Examples
+    --------
+    >>> mask = np.array([0, 0, 1], dtype=np.bool)
+    >>> flatten_mask(mask)
+    array([False, False,  True], dtype=bool)
+    >>> mask = np.array([(0, 0), (0, 1)], dtype=[('a', bool), ('b', bool)])
+    >>> flatten_mask(mask)
+    array([False, False, False,  True], dtype=bool)
+    >>> mdtype = [('a', bool), ('b', [('ba', bool), ('bb', bool)])]
+    >>> mask = np.array([(0, (0, 0)), (0, (0, 1))], dtype=mdtype)
+    >>> flatten_mask(mask)
+    array([False, False, False, False, False,  True], dtype=bool)
+    
+    """
+    #
+    def _flatmask(mask):
+        "Flatten the mask and returns a (maybe nested) sequence of booleans."
+        mnames = mask.dtype.names
+        if mnames:
+            return [flatten_mask(mask[name]) for name in mnames]
+        else:
+            return mask
+    #
+    def _flatsequence(sequence):
+        "Generates a flattened version of the sequence."
+        try:
+            for element in sequence:
+                if hasattr(element, '__iter__'):
+                    for f in _flatsequence(element):
+                        yield f
+                else:
+                    yield element
+        except TypeError:
+            yield sequence
+    #
+    mask = np.asarray(mask)
+    flattened = _flatsequence(_flatmask(mask))
+    return np.array([_ for _ in flattened], dtype=bool)
+
+
 #####--------------------------------------------------------------------------
 #--- --- Masking functions ---
 #####--------------------------------------------------------------------------
@@ -1208,8 +1327,8 @@
     #
     def getdoc(self):
         "Return the doc of the function (from the doc of the method)."
-        methdoc = getattr(ndarray, self.__name__, None)
-        methdoc = getattr(np, self.__name__, methdoc)
+        methdoc = getattr(ndarray, self.__name__, None) or \
+                  getattr(np, self.__name__, None)
         if methdoc is not None:
             return methdoc.__doc__
     #
@@ -1327,7 +1446,7 @@
         # Process data............
         _data = np.array(data, dtype=dtype, copy=copy, subok=True, ndmin=ndmin)
         _baseclass = getattr(data, '_baseclass', type(_data))
-        # Check that we'ew not erasing the mask..........
+        # Check that we're not erasing the mask..........
         if isinstance(data, MaskedArray) and (data.shape != _data.shape):
             copy = True
         # Careful, cls might not always be MaskedArray...
@@ -1359,7 +1478,13 @@
                     _data._mask = np.zeros(_data.shape, dtype=mdtype)
             # Check whether we missed something
             elif isinstance(data, (tuple,list)):
-                mask = np.array([getmaskarray(m) for m in data], dtype=mdtype)
+                try:
+                    # If data is a sequence of masked array
+                    mask = np.array([getmaskarray(m) for m in data],
+                                    dtype=mdtype)
+                except ValueError:
+                    # If data is nested
+                    mask = nomask
                 # Force shrinking of the mask if needed (and possible)
                 if (mdtype == MaskType) and mask.any():
                     _data._mask = mask
@@ -1392,7 +1517,7 @@
                 else:
                     msg = "Mask and data not compatible: data size is %i, "+\
                           "mask size is %i."
-                    raise MAError, msg % (nd, nm)
+                    raise MaskError, msg % (nd, nm)
                 copy = True
             # Set the mask to the new value
             if _data._mask is nomask:
@@ -1404,8 +1529,16 @@
                     _data._sharedmask = not copy
                 else:
                     if names_:
-                        for n in names_:
-                            _data._mask[n] |= mask[n]
+                        def _recursive_or(a, b):
+                            "do a|=b on each field of a, recursively"
+                            for name in a.dtype.names:
+                                (af, bf) = (a[name], b[name])
+                                if af.dtype.names:
+                                    _recursive_or(af, bf)
+                                else:
+                                    af |= bf
+                            return
+                        _recursive_or(_data._mask, mask)
                     else:
                         _data._mask = np.logical_or(mask, _data._mask)
                     _data._sharedmask = False
@@ -1601,7 +1734,7 @@
             # A record ................
             if isinstance(dout, np.void):
                 mask = _mask[indx]
-                if mask.view((bool, len(mask.dtype))).any():
+                if flatten_mask(mask).any():
                     dout = masked_array(dout, mask=mask)
                 else:
                     return dout
@@ -1633,7 +1766,7 @@
 
         """
         if self is masked:
-            raise MAError, 'Cannot alter the masked element.'
+            raise MaskError, 'Cannot alter the masked element.'
         # This test is useful, but we should keep things light...
 #        if getmask(indx) is not nomask:
 #            msg = "Masked arrays must be filled before they can be used as indices!"
@@ -2146,32 +2279,33 @@
     #............................................
     def __iadd__(self, other):
         "Add other to self in-place."
-        ndarray.__iadd__(self._data, getdata(other))
         m = getmask(other)
         if self._mask is nomask:
             self._mask = m
-        elif m is not nomask:
-            self._mask += m
+        else:
+            if m is not nomask:
+                self._mask += m
+        ndarray.__iadd__(self._data, np.where(self._mask, 0, getdata(other)))
         return self
     #....
     def __isub__(self, other):
         "Subtract other from self in-place."
-        ndarray.__isub__(self._data, getdata(other))
         m = getmask(other)
         if self._mask is nomask:
             self._mask = m
         elif m is not nomask:
             self._mask += m
+        ndarray.__isub__(self._data, np.where(self._mask, 0, getdata(other)))
         return self
     #....
     def __imul__(self, other):
         "Multiply self by other in-place."
-        ndarray.__imul__(self._data, getdata(other))
         m = getmask(other)
         if self._mask is nomask:
             self._mask = m
         elif m is not nomask:
             self._mask += m
+        ndarray.__imul__(self._data, np.where(self._mask, 1, getdata(other)))
         return self
     #....
     def __idiv__(self, other):
@@ -2184,21 +2318,25 @@
         if dom_mask.any():
             (_, fval) = ufunc_fills[np.divide]
             other_data = np.where(dom_mask, fval, other_data)
-        ndarray.__idiv__(self._data, other_data)
-        self._mask = mask_or(self._mask, new_mask)
+#        self._mask = mask_or(self._mask, new_mask)
+        self._mask |= new_mask
+        ndarray.__idiv__(self._data, np.where(self._mask, 1, other_data))
         return self
     #...
     def __ipow__(self, other):
         "Raise self to the power other, in place"
-        _data = self._data
         other_data = getdata(other)
         other_mask = getmask(other)
-        ndarray.__ipow__(_data, other_data)
-        invalid = np.logical_not(np.isfinite(_data))
+        ndarray.__ipow__(self._data, np.where(self._mask, 1, other_data))
+        invalid = np.logical_not(np.isfinite(self._data))
+        if invalid.any():
+            if self._mask is not nomask:
+                self._mask |= invalid
+            else:
+                self._mask = invalid
+            np.putmask(self._data, invalid, self.fill_value)
         new_mask = mask_or(other_mask, invalid)
         self._mask = mask_or(self._mask, new_mask)
-        # The following line is potentially problematic, as we change _data...
-        np.putmask(self._data, invalid, self.fill_value)
         return self
     #............................................
     def __float__(self):
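
The reworked in-place operators above merge the masks first and then hand ndarray the operand filtered through np.where(self._mask, <identity>, other_data), so masked slots receive a harmless identity value (0 for add/subtract, 1 for multiply/divide/power) and the data stored under the mask is left intact. A small illustration (printed values are indicative):

    import numpy.ma as ma

    a = ma.array([1.0, 2.0, 3.0], mask=[0, 1, 0])
    b = ma.array([10.0, 20.0, 30.0], mask=[0, 0, 1])
    a += b
    print a          # [11.0 -- --]      (the masks are combined)
    print a.data     # [ 11.   2.   3.]  (values under the mask are untouched)
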
@@ -2217,7 +2355,7 @@
             raise TypeError("Only length-1 arrays can be converted "\
                             "to Python scalars")
         elif self._mask:
-            raise MAError, 'Cannot convert masked element to a Python int.'
+            raise MaskError, 'Cannot convert masked element to a Python int.'
         return int(self.item())
     #............................................
     def get_imag(self):
@@ -2560,9 +2698,7 @@
 
     def trace(self, offset=0, axis1=0, axis2=1, dtype=None, out=None):
         """
-        Return the sum along the offset diagonal of the array's
-        indicated `axis1` and `axis2`.
-
+        (this docstring should be overwritten)
         """
         #!!!: implement out + test!
         m = self._mask
@@ -2573,8 +2709,8 @@
         else:
             D = self.diagonal(offset=offset, axis1=axis1, axis2=axis2)
             return D.astype(dtype).filled(0).sum(axis=None, out=out)
+    trace.__doc__ = ndarray.trace.__doc__
 
-
     def sum(self, axis=None, dtype=None, out=None):
         """
         Return the sum of the array elements over the given axis.
@@ -2668,8 +2804,8 @@
         have the same shape and buffer length as the expected output
         but the type will be cast if necessary.
 
-    Warning
-    -------
+    Warnings
+    --------
         The mask is lost if out is not a valid :class:`MaskedArray` !
 
     Returns
@@ -2678,8 +2814,8 @@
         A new array holding the result is returned unless ``out`` is
         specified, in which case a reference to ``out`` is returned.
 
-    Example
-    -------
+    Examples
+    --------
     >>> marr = np.ma.array(np.arange(10), mask=[0,0,0,1,1,1,0,0,0,0])
     >>> print marr.cumsum()
     [0 1 3 -- -- -- 9 16 24 33]
@@ -2824,7 +2960,14 @@
 
 
     def mean(self, axis=None, dtype=None, out=None):
-        ""
+        """
+    Returns the average of the array elements along given axis.
+    Refer to `numpy.mean` for full documentation.
+
+    See Also
+    --------
+    numpy.mean : equivalent function
+        """
         if self._mask is nomask:
             result = super(MaskedArray, self).mean(axis=axis, dtype=dtype)
         else:
@@ -2840,7 +2983,6 @@
                 outmask.flat = getattr(result, '_mask', nomask)
             return out
         return result
-    mean.__doc__ = ndarray.mean.__doc__
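Since mean.__doc__ is no longer copied from ndarray, the short docstring
above is all the reader gets; for masked input the average is taken over the
unmasked entries only.  A small illustrative doctest (assumed formatting):

    >>> import numpy.ma as ma
    >>> a = ma.array([1., 2., 3., 4.], mask=[0, 0, 0, 1])
    >>> a.mean()            # only the three unmasked values contribute
    2.0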
 
     def anom(self, axis=None, dtype=None):
         """
@@ -2885,6 +3027,10 @@
             if out is not None:
                 if isinstance(out, MaskedArray):
                     out.__setmask__(True)
+                elif out.dtype.kind in 'biu':
+                    errmsg = "Masked data information would be lost in one or "\
+                             "more locations."
+                    raise MaskError(errmsg)
                 else:
                     out.flat = np.nan
                 return out
@@ -2937,48 +3083,37 @@
     #............................................
     def argsort(self, axis=None, fill_value=None, kind='quicksort',
                 order=None):
-        """Return an ndarray of indices that sort the array along the
-        specified axis.  Masked values are filled beforehand to
-        fill_value.
+        """
+    Return an ndarray of indices that sort the array along the
+    specified axis.  Masked values are filled beforehand to
+    fill_value.
 
-        Parameters
-        ----------
-        axis : int, optional
-            Axis to be indirectly sorted.
-            If not given, uses a flatten version of the array.
-        fill_value : {var}
-            Value used to fill in the masked values.
-            If not given, self.fill_value is used instead.
-        kind : {string}
-            Sorting algorithm (default 'quicksort')
-            Possible values: 'quicksort', 'mergesort', or 'heapsort'
+    Parameters
+    ----------
+    axis : int, optional
+        Axis along which to sort.  If not given, the flattened array is used.
+    kind : {'quicksort', 'mergesort', 'heapsort'}, optional
+        Sorting algorithm.
+    order : list, optional
+        When `a` is an array with fields defined, this argument specifies
+        which fields to compare first, second, etc.  Not all fields need be
+        specified.
+
+    Returns
+    -------
+    index_array : ndarray, int
+        Array of indices that sort `a` along the specified axis.
+        In other words, ``a[index_array]`` yields a sorted `a`.
+    
+    See Also
+    --------
+    sort : Describes sorting algorithms used.
+    lexsort : Indirect stable sort with multiple keys.
+    ndarray.sort : Inplace sort.
 
-        Notes
-        -----
-        This method executes an indirect sort along the given axis
-        using the algorithm specified by the kind keyword. It returns
-        an array of indices of the same shape as 'a' that index data
-        along the given axis in sorted order.
+    Notes
+    -----
+    See `sort` for notes on the different sorting algorithms.
 
-        The various sorts are characterized by average speed, worst
-        case performance need for work space, and whether they are
-        stable.  A stable sort keeps items with the same key in the
-        same relative order. The three available algorithms have the
-        following properties:
-
-        |------------------------------------------------------|
-        |    kind   | speed |  worst case | work space | stable|
-        |------------------------------------------------------|
-        |'quicksort'|   1   | O(n^2)      |     0      |   no  |
-        |'mergesort'|   2   | O(n*log(n)) |    ~n/2    |   yes |
-        |'heapsort' |   3   | O(n*log(n)) |     0      |   no  |
-        |------------------------------------------------------|
-
-        All the sort algorithms make temporary copies of the data when
-        the sort is not along the last axis. Consequently, sorts along
-        the last axis are faster and use less space than sorts along
-        other axis.
-
         """
         if fill_value is None:
             fill_value = default_fill_value(self)
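The new argsort docstring no longer spells out the role of fill_value, but
the behaviour is unchanged: masked entries are filled with fill_value before
the indirect sort, so with a large fill value they end up last.  A hedged
doctest-style sketch (values and output formatting are illustrative):

    >>> import numpy.ma as ma
    >>> a = ma.array([3, 2, 1], mask=[0, 0, 1])
    >>> a.argsort(fill_value=9)     # the masked entry sorts to the end
    array([1, 0, 2])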
@@ -3069,19 +3204,21 @@
     def sort(self, axis=-1, kind='quicksort', order=None,
              endwith=True, fill_value=None):
         """
-    Sort along the given axis.
+    Return a sorted copy of an array.
 
     Parameters
     ----------
-    axis : {int}, optional
-        Axis to be indirectly sorted.
-    kind : {'quicksort', 'mergesort', or 'heapsort'}, optional
-        Sorting algorithm (default 'quicksort')
-        Possible values: 'quicksort', 'mergesort', or 'heapsort'.
-    order : {None, var}
-        If a has fields defined, then the order keyword can be the field name
-        to sort on or a list (or tuple) of field names to indicate  the order
-        that fields should be used to define the sort.
+    a : array_like
+        Array to be sorted.
+    axis : int or None, optional
+        Axis along which to sort. If None, the array is flattened before
+        sorting. The default is -1, which sorts along the last axis.
+    kind : {'quicksort', 'mergesort', 'heapsort'}, optional
+        Sorting algorithm. Default is 'quicksort'.
+    order : list, optional
+        When `a` is a structured array, this argument specifies which fields
+        to compare first, second, and so on.  This list does not need to
+        include all of the fields.
     endwith : {True, False}, optional
         Whether missing values (if any) should be forced in the upper indices
         (at the end of the array) (True) or lower indices (at the beginning).
@@ -3091,30 +3228,68 @@
 
     Returns
     -------
-    - When used as method, returns None.
-    - When used as a function, returns an array.
+    sorted_array : ndarray
+        Array of the same type and shape as `a`.
 
+    See Also
+    --------
+    ndarray.sort : Method to sort an array in-place.
+    argsort : Indirect sort.
+    lexsort : Indirect stable sort on multiple keys.
+    searchsorted : Find elements in a sorted array.
+
     Notes
     -----
-    This method sorts 'a' in place along the given axis using
-    the algorithm specified by the kind keyword.
+    The various sorting algorithms are characterized by their average speed,
+    worst case performance, work space size, and whether they are stable. A
+    stable sort keeps items with the same key in the same relative
+    order. The three available algorithms have the following
+    properties:
 
-    The various sorts may characterized by average speed,
-    worst case performance need for work space, and whether
-    they are stable.  A stable sort keeps items with the same
-    key in the same relative order and is most useful when
-    used w/ argsort where the key might differ from the items
-    being sorted.  The three available algorithms have the
-    following properties:
+    =========== ======= ============= ============ =======
+       kind      speed   worst case    work space  stable
+    =========== ======= ============= ============ =======
+    'quicksort'    1     O(n^2)            0          no
+    'mergesort'    2     O(n*log(n))      ~n/2        yes
+    'heapsort'     3     O(n*log(n))       0          no
+    =========== ======= ============= ============ =======
 
-    |------------------------------------------------------|
-    |    kind   | speed |  worst case | work space | stable|
-    |------------------------------------------------------|
-    |'quicksort'|   1   | O(n^2)      |     0      |   no  |
-    |'mergesort'|   2   | O(n*log(n)) |    ~n/2    |   yes |
-    |'heapsort' |   3   | O(n*log(n)) |     0      |   no  |
-    |------------------------------------------------------|
+    All the sort algorithms make temporary copies of the data when
+    sorting along any but the last axis.  Consequently, sorting along
+    the last axis is faster and uses less space than sorting along
+    any other axis.
 
+    Examples
+    --------
+    >>> a = np.array([[1,4],[3,1]])
+    >>> np.sort(a)                # sort along the last axis
+    array([[1, 4],
+           [1, 3]])
+    >>> np.sort(a, axis=None)     # sort the flattened array
+    array([1, 1, 3, 4])
+    >>> np.sort(a, axis=0)        # sort along the first axis
+    array([[1, 1],
+           [3, 4]])
+
+    Use the `order` keyword to specify a field to use when sorting a
+    structured array:
+
+    >>> dtype = [('name', 'S10'), ('height', float), ('age', int)]
+    >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38),
+    ...           ('Galahad', 1.7, 38)]
+    >>> a = np.array(values, dtype=dtype)       # create a structured array
+    >>> np.sort(a, order='height')                        # doctest: +SKIP
+    array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41),
+           ('Lancelot', 1.8999999999999999, 38)],
+          dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')])
+
+    Sort by age, then height if ages are equal:
+
+    >>> np.sort(a, order=['age', 'height'])               # doctest: +SKIP
+    array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38),
+           ('Arthur', 1.8, 41)],
+          dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')])
+
         """
         if self._mask is nomask:
             ndarray.sort(self, axis=axis, kind=kind, order=order)
@@ -3189,6 +3364,10 @@
                 outmask = out._mask = make_mask_none(out.shape)
             outmask.flat = newmask
         else:
+            if out.dtype.kind in 'biu':
+                errmsg = "Masked data information would be lost in one or more"\
+                         " location."
+                         " locations."
             np.putmask(out, newmask, np.nan)
         return out
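This check (and its twin in the next hunk) covers reductions handed a plain
ndarray as out: masked results used to be written as NaN, which an integer,
boolean or unsigned out cannot represent, so a MaskError is raised instead.
The updated test_minmax_funcs_with_output in test_core.py exercises this.
A hedged sketch (the method and values are only illustrative):

    >>> import numpy as np
    >>> import numpy.ma as ma
    >>> xm = ma.array([[1., 2.], [3., 4.]], mask=[[1, 0], [1, 0]])
    >>> xm.min(axis=0, out=np.empty(2, dtype=int))    # doctest: +SKIP
    Traceback (most recent call last):
        ...
    MaskError: Masked data information would be lost in one or more locations.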
 
@@ -3251,6 +3430,11 @@
                 outmask = out._mask = make_mask_none(out.shape)
             outmask.flat = newmask
         else:
+            
+            if out.dtype.kind in 'biu':
+                errmsg = "Masked data information would be lost in one or more"\
+                         " locations."
+                raise MaskError(errmsg)
             np.putmask(out, newmask, np.nan)
         return out
 
@@ -3633,12 +3817,16 @@
     def __init__(self, methodname):
         self.__name__ = methodname
         self.__doc__ = self.getdoc()
+    #
     def getdoc(self):
         "Return the doc of the function (from the doc of the method)."
-        try:
-            return getattr(MaskedArray, self.__name__).__doc__
-        except:
-            return getattr(np, self.__name__).__doc__
+        meth = getattr(MaskedArray, self.__name__, None) or\
+               getattr(np, self.__name__, None)
+        signature = self.__name__ + get_object_signature(meth)
+        if meth is not None:
+            doc = """    %s\n%s""" % (signature, getattr(meth, '__doc__', None))
+            return doc
+    #
     def __call__(self, a, *args, **params):
         if isinstance(a, MaskedArray):
             return getattr(a, self.__name__).__call__(*args, **params)
@@ -3657,25 +3845,29 @@
 all = _frommethod('all')
 anomalies = anom = _frommethod('anom')
 any = _frommethod('any')
-conjugate = _frommethod('conjugate')
+compress = _frommethod('compress')
+cumprod = _frommethod('cumprod')
+cumsum = _frommethod('cumsum')
+copy = _frommethod('copy')
+diagonal = _frommethod('diagonal')
+harden_mask = _frommethod('harden_mask')
 ids = _frommethod('ids')
-nonzero = _frommethod('nonzero')
-diagonal = _frommethod('diagonal')
 maximum = _maximum_operation()
 mean = _frommethod('mean')
 minimum = _minimum_operation ()
+nonzero = _frommethod('nonzero')
+prod = _frommethod('prod')
 product = _frommethod('prod')
-ptp = _frommethod('ptp')
 ravel = _frommethod('ravel')
 repeat = _frommethod('repeat')
-round = _frommethod('round')
+shrink_mask = _frommethod('shrink_mask')
+soften_mask = _frommethod('soften_mask')
 std = _frommethod('std')
 sum = _frommethod('sum')
 swapaxes = _frommethod('swapaxes')
 take = _frommethod('take')
 trace = _frommethod('trace')
 var = _frommethod('var')
-compress = _frommethod('compress')
 
 #..............................................................................
 def power(a, b, third=None):
@@ -3683,7 +3875,7 @@
 
     """
     if third is not None:
-        raise MAError, "3-argument power not supported."
+        raise MaskError, "3-argument power not supported."
     # Get the masks
     ma = getmask(a)
     mb = getmask(b)
@@ -3697,22 +3889,22 @@
     else:
         basetype = MaskedArray
     # Get the result and view it as a (subclass of) MaskedArray
-    result = umath.power(fa, fb).view(basetype)
+    result = np.where(m, fa, umath.power(fa, fb)).view(basetype)
+    result._update_from(a)
     # Find where we're in trouble w/ NaNs and Infs
     invalid = np.logical_not(np.isfinite(result.view(ndarray)))
-    # Retrieve some extra attributes if needed
-    if isinstance(result, MaskedArray):
-        result._update_from(a)
     # Add the initial mask
     if m is not nomask:
-        if np.isscalar(result):
+        if not (result.ndim):
             return masked
+        m |= invalid
         result._mask = m
     # Fix the invalid parts
     if invalid.any():
         if not result.ndim:
             return masked
-        result[invalid] = masked
+        elif result._mask is nomask:
+            result._mask = invalid
         result._data[invalid] = result.fill_value
     return result
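The reworked power keeps the original data wherever the inputs are masked
and flags non-finite results (e.g. a negative base raised to a fractional
exponent) as masked instead of writing fill values into an unmasked array.
A doctest-style sketch drawn from the test_power test below (output
formatting is indicative):

    >>> import numpy.ma as ma
    >>> x = ma.array([-1.1, -1.1, 1.1, 1.1, 0.])
    >>> b = ma.array([0.5, 2., 0.5, 2., -1.], mask=[0, 0, 0, 0, 1])
    >>> print ma.power(x, b).mask   # invalid (-1.1)**0.5 plus the masked exponent
    [ True False False False  True]
    >>> ma.power(-1.1, ma.masked) is ma.masked
    True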
 
@@ -3825,6 +4017,40 @@
 count.__doc__ = MaskedArray.count.__doc__
 
 
+def diag(v, k=0):
+    """
+    Extract a diagonal or construct a diagonal array.
+
+    Parameters
+    ----------
+    v : array_like
+        If `v` is a 2-dimensional array, return a copy of
+        its `k`-th diagonal. If `v` is a 1-dimensional array,
+        return a 2-dimensional array with `v` on the `k`-th diagonal.
+    k : int, optional
+        Diagonal in question.  The default is 0.
+
+    Examples
+    --------
+    >>> x = np.arange(9).reshape((3,3))
+    >>> x
+    array([[0, 1, 2],
+           [3, 4, 5],
+           [6, 7, 8]])
+    >>> np.diag(x)
+    array([0, 4, 8])
+    >>> np.diag(np.diag(x))
+    array([[0, 0, 0],
+           [0, 4, 0],
+           [0, 0, 8]])
+
+    """
+    output = np.diag(v, k).view(MaskedArray)
+    if getmask(v) is not nomask:
+        output._mask = np.diag(v._mask, k)
+    return output
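The masked diag keeps the mask in step with the data it extracts or embeds,
as the new test_diag in test_core.py checks.  A short sketch (the usual
masked-array print formatting is assumed):

    >>> import numpy.ma as ma
    >>> x = ma.arange(9).reshape((3, 3))
    >>> x[1, 1] = ma.masked
    >>> print ma.diag(x)
    [0 -- 8]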
+
+
 def expand_dims(x, axis):
     """
     Expand the shape of the array by including a new axis before
@@ -4085,7 +4311,8 @@
 
 
 def round_(a, decimals=0, out=None):
-    """Return a copy of a, rounded to 'decimals' places.
+    """
+    Return a copy of a, rounded to 'decimals' places.
 
     When 'decimals' is negative, it specifies the number of positions
     to the left of the decimal point.  The real and imaginary parts of
@@ -4114,9 +4341,20 @@
         if hasattr(out, '_mask'):
             out._mask = getmask(a)
         return out
+round = round_
 
+def inner(a, b):
+    """
+    Returns the inner product of a and b for arrays of floating point types.
 
-def inner(a, b):
+    Like the generic NumPy equivalent, the product sum is over the last
+    dimension of a and b.
+
+    Notes
+    -----
+    The first argument is not conjugated.
+
+    """
     fa = filled(a, 0)
     fb = filled(b, 0)
     if len(fa.shape) == 0:
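Only the head of inner is shown in this hunk; as the new docstring and the
filled(..., 0) calls indicate, masked entries are treated as zero before the
ordinary inner product is taken.  A hedged one-liner:

    >>> import numpy.ma as ma
    >>> int(ma.inner(ma.array([1, 2, 3], mask=[0, 0, 1]), [1, 1, 1]))
    3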
@@ -4167,7 +4405,7 @@
 
 def allclose (a, b, masked_equal=True, rtol=1.e-5, atol=1.e-8, fill_value=None):
     """
-        Returns True if two arrays are element-wise equal within a tolerance.
+    Returns True if two arrays are element-wise equal within a tolerance.
 
     The tolerance values are positive, typically very small numbers.  The
     relative difference (`rtol` * `b`) and the absolute difference (`atol`)
@@ -4332,9 +4570,18 @@
     def __init__(self, funcname):
         self._func = getattr(np, funcname)
         self.__doc__ = self.getdoc()
+    #
     def getdoc(self):
         "Return the doc of the function (from the doc of the method)."
-        return self._func.__doc__
+        doc = getattr(self._func, '__doc__', None)
+        sig = get_object_signature(self._func)
+        if doc:
+            # Add the signature of the function at the beginning of the doc
+            if sig:
+                sig = "%s%s\n" % (self._func.__name__, sig)
+            doc = sig + doc
+        return doc
+    #
     def __call__(self, a, *args, **params):
         return self._func.__call__(a, *args, **params).view(MaskedArray)
 
@@ -4348,5 +4595,6 @@
 indices = np.indices
 ones = _convert2ma('ones')
 zeros = _convert2ma('zeros')
+squeeze = np.squeeze
 
 ###############################################################################

Modified: branches/dynamic_cpu_configuration/numpy/ma/extras.py
===================================================================
--- branches/dynamic_cpu_configuration/numpy/ma/extras.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/ma/extras.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -16,15 +16,15 @@
            'column_stack','compress_cols','compress_rowcols', 'compress_rows',
            'count_masked', 'corrcoef', 'cov',
            'diagflat', 'dot','dstack',
-           'ediff1d','expand_dims',
-           'flatnotmasked_contiguous','flatnotmasked_edges',
-           'hsplit','hstack',
-           'mask_cols','mask_rowcols','mask_rows','masked_all','masked_all_like',
-           'median','mr_',
-           'notmasked_contiguous','notmasked_edges',
+           'ediff1d',
+           'flatnotmasked_contiguous', 'flatnotmasked_edges',
+           'hsplit', 'hstack',
+           'mask_cols', 'mask_rowcols', 'mask_rows', 'masked_all',
+           'masked_all_like', 'median', 'mr_',
+           'notmasked_contiguous', 'notmasked_edges',
            'polyfit',
            'row_stack',
-           'vander','vstack',
+           'vander', 'vstack',
            ]
 
 from itertools import groupby
@@ -32,15 +32,15 @@
 
 import core as ma
 from core import MaskedArray, MAError, add, array, asarray, concatenate, count,\
-    filled, getmask, getmaskarray, masked, masked_array, mask_or, nomask, ones,\
-    sort, zeros
+    filled, getmask, getmaskarray, make_mask_descr, masked, masked_array,\
+    mask_or, nomask, ones, sort, zeros
 #from core import *
 
 import numpy as np
 from numpy import ndarray, array as nxarray
 import numpy.core.umath as umath
 from numpy.lib.index_tricks import AxisConcatenator
-from numpy.lib.polynomial import _lstsq
+from numpy.linalg import lstsq
 
 #...............................................................................
 def issequence(seq):
@@ -77,7 +77,7 @@
 
     """
     a = masked_array(np.empty(shape, dtype),
-                     mask=np.ones(shape, bool))
+                     mask=np.ones(shape, make_mask_descr(dtype)))
     return a
 
 def masked_all_like(arr):
@@ -85,8 +85,8 @@
     the array `a`, where all the data are masked.
 
     """
-    a = masked_array(np.empty_like(arr),
-                     mask=np.ones(arr.shape, bool))
+    a = np.empty_like(arr).view(MaskedArray)
+    a._mask = np.ones(a.shape, dtype=make_mask_descr(a.dtype))
     return a
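With make_mask_descr, masked_all and masked_all_like now build a mask whose
dtype mirrors the (possibly nested) field structure of the data, which is
what the new TestGeneric tests in test_extras.py verify.  A small sketch
(field names are illustrative):

    >>> import numpy as np
    >>> import numpy.ma as ma
    >>> dt = np.dtype([('a', float), ('b', float)])
    >>> m = ma.masked_all((2,), dtype=dt)
    >>> m.mask.dtype.names      # the mask carries the same fields as the data
    ('a', 'b')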
 
 
@@ -102,11 +102,13 @@
 
     def getdoc(self):
         "Retrieves the __doc__ string from the function."
-        inidoc = getattr(np, self.__name__).__doc__
-        if inidoc:
+        npfunc = getattr(np, self.__name__, None)
+        doc = getattr(npfunc, '__doc__', None)
+        if doc:
+            sig = self.__name__ + ma.get_object_signature(npfunc)
             locdoc = "Notes\n-----\nThe function is applied to both the _data"\
                      " and the _mask, if any."
-            return '\n'.join((inidoc, locdoc))
+            return '\n'.join((sig, doc, locdoc))
         return
 
 
@@ -147,16 +149,6 @@
 
 diagflat = _fromnxfunction('diagflat')
 
-def expand_dims(a, axis):
-    """Expands the shape of a by including newaxis before axis.
-    """
-    if not isinstance(a, MaskedArray):
-        return np.expand_dims(a, axis)
-    elif getmask(a) is nomask:
-        return np.expand_dims(a, axis).view(MaskedArray)
-    m = getmaskarray(a)
-    return masked_array(np.expand_dims(a, axis),
-                        mask=np.expand_dims(m, axis))
 
 #####--------------------------------------------------------------------------
 #----
@@ -172,10 +164,9 @@
 
 
 def apply_along_axis(func1d, axis, arr, *args, **kwargs):
-    """Execute func1d(arr[i],*args) where func1d takes 1-D arrays and
-    arr is an N-d array.  i varies so as to apply the function along
-    the given axis for each 1-d subarray in arr.
     """
+    (This docstring should be overwritten)
+    """
     arr = array(arr, copy=False, subok=True)
     nd = arr.ndim
     if axis < 0:
@@ -257,7 +248,9 @@
         result = asarray(outarr, dtype=max_dtypes)
         result.fill_value = ma.default_fill_value(result)
     return result
+apply_along_axis.__doc__ = np.apply_along_axis.__doc__
 
+
 def average(a, axis=None, weights=None, returned=False):
     """Average the array over the given axis.
 
@@ -446,23 +439,25 @@
 
 #..............................................................................
 def compress_rowcols(x, axis=None):
-    """Suppress the rows and/or columns of a 2D array that contains
+    """
+    Suppress the rows and/or columns of a 2D array that contain
     masked values.
 
     The suppression behavior is selected with the `axis`parameter.
+
         - If axis is None, rows and columns are suppressed.
         - If axis is 0, only rows are suppressed.
         - If axis is 1 or -1, only columns are suppressed.
 
     Parameters
     ----------
-        axis : int, optional
-            Axis along which to perform the operation.
-            If None, applies to a flattened version of the array.
+    axis : int, optional
+        Axis along which to perform the operation.
+        If None, applies to a flattened version of the array.
 
     Returns
     -------
-        compressed_array : an ndarray.
+    compressed_array : an ndarray.
 
     """
     x = asarray(x)
@@ -499,9 +494,10 @@
     return compress_rowcols(a, 1)
 
 def mask_rowcols(a, axis=None):
-    """Mask whole rows and/or columns of a 2D array that contain
+    """
+    Mask whole rows and/or columns of a 2D array that contain
     masked values.  The masking behavior is selected with the
-    `axis`parameter.
+    `axis` parameter.
 
         - If axis is None, rows and columns are masked.
         - If axis is 0, only rows are masked.
@@ -509,13 +505,13 @@
 
     Parameters
     ----------
-        axis : int, optional
-            Axis along which to perform the operation.
-            If None, applies to a flattened version of the array.
+    axis : int, optional
+        Axis along which to perform the operation.
+        If None, applies to a flattened version of the array.
 
     Returns
     -------
-         a *pure* ndarray.
+    a *pure* ndarray.
 
     """
     a = asarray(a)
@@ -996,128 +992,20 @@
 #####--------------------------------------------------------------------------
 
 def vander(x, n=None):
-    """%s
-    Notes
-    -----
-        Masked values in x will result in rows of zeros.
     """
+    Masked values in the input array result in rows of zeros.
+    """
     _vander = np.vander(x, n)
     m = getmask(x)
     if m is not nomask:
         _vander[m] = 0
     return _vander
+vander.__doc__ = ma.doc_note(np.vander.__doc__, vander.__doc__)
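As the shortened docstring says, masked entries in x simply become rows of
zeros in the Vandermonde matrix, so they drop out of any subsequent
least-squares solve.  Sketch (indicative formatting):

    >>> import numpy.ma as ma
    >>> x = ma.array([1, 2, 3], mask=[0, 1, 0])
    >>> print ma.vander(x)
    [[1 1 1]
     [0 0 0]
     [9 3 1]]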
 
 
 def polyfit(x, y, deg, rcond=None, full=False):
     """
-    Least squares polynomial fit.
-
-    Do a best fit polynomial of degree 'deg' of 'x' to 'y'.  Return value is a
-    vector of polynomial coefficients [pk ... p1 p0].  Eg, for ``deg = 2``::
-
-        p2*x0^2 +  p1*x0 + p0 = y1
-        p2*x1^2 +  p1*x1 + p0 = y1
-        p2*x2^2 +  p1*x2 + p0 = y2
-        .....
-        p2*xk^2 +  p1*xk + p0 = yk
-
-    Parameters
-    ----------
-    x : array_like
-        1D vector of sample points.
-    y : array_like
-        1D vector or 2D array of values to fit. The values should run down the
-        columns in the 2D case.
-    deg : integer
-        Degree of the fitting polynomial
-    rcond: {None, float}, optional
-        Relative condition number of the fit. Singular values smaller than this
-        relative to the largest singular value will be ignored. The defaul value
-        is len(x)*eps, where eps is the relative precision of the float type,
-        about 2e-16 in most cases.
-    full : {False, boolean}, optional
-        Switch determining nature of return value. When it is False just the
-        coefficients are returned, when True diagnostic information from the
-        singular value decomposition is also returned.
-
-    Returns
-    -------
-    coefficients, [residuals, rank, singular_values, rcond] : variable
-        When full=False, only the coefficients are returned, running down the
-        appropriate colume when y is a 2D array. When full=True, the rank of the
-        scaled Vandermonde matrix, its effective rank in light of the rcond
-        value, its singular values, and the specified value of rcond are also
-        returned.
-
-    Warns
-    -----
-    RankWarning : if rank is reduced and not full output
-        The warnings can be turned off by:
-        >>> import warnings
-        >>> warnings.simplefilter('ignore',np.RankWarning)
-
-
-    See Also
-    --------
-    polyval : computes polynomial values.
-
-    Notes
-    -----
-    If X is a the Vandermonde Matrix computed from x (see
-    http://mathworld.wolfram.com/VandermondeMatrix.html), then the
-    polynomial least squares solution is given by the 'p' in
-
-        X*p = y
-
-    where X.shape is a matrix of dimensions (len(x), deg + 1), p is a vector of
-    dimensions (deg + 1, 1), and y is a vector of dimensions (len(x), 1).
-
-    This equation can be solved as
-
-        p = (XT*X)^-1 * XT * y
-
-    where XT is the transpose of X and -1 denotes the inverse. However, this
-    method is susceptible to rounding errors and generally the singular value
-    decomposition of the matrix X is preferred and that is what is done here.
-    The singular value method takes a paramenter, 'rcond', which sets a limit on
-    the relative size of the smallest singular value to be used in solving the
-    equation. This may result in lowering the rank of the Vandermonde matrix, in
-    which case a RankWarning is issued. If polyfit issues a RankWarning, try a
-    fit of lower degree or replace x by x - x.mean(), both of which will
-    generally improve the condition number. The routine already normalizes the
-    vector x by its maximum absolute value to help in this regard. The rcond
-    parameter can be set to a value smaller than its default, but the resulting
-    fit may be spurious. The current default value of rcond is len(x)*eps, where
-    eps is the relative precision of the floating type being used, generally
-    around 1e-7 and 2e-16 for IEEE single and double precision respectively.
-    This value of rcond is fairly conservative but works pretty well when x -
-    x.mean() is used in place of x.
-
-
-    DISCLAIMER: Power series fits are full of pitfalls for the unwary once the
-    degree of the fit becomes large or the interval of sample points is badly
-    centered. The problem is that the powers x**n are generally a poor basis for
-    the polynomial functions on the sample interval, resulting in a Vandermonde
-    matrix is ill conditioned and coefficients sensitive to rounding erros. The
-    computation of the polynomial values will also sensitive to rounding errors.
-    Consequently, the quality of the polynomial fit should be checked against
-    the data whenever the condition number is large.  The quality of polynomial
-    fits *can not* be taken for granted. If all you want to do is draw a smooth
-    curve through the y values and polyfit is not doing the job, try centering
-    the sample range or look into scipy.interpolate, which includes some nice
-    spline fitting functions that may be of use.
-
-    For more info, see
-    http://mathworld.wolfram.com/LeastSquaresFittingPolynomial.html,
-    but note that the k's and n's in the superscripts and subscripts
-    on that page.  The linear algebra is correct, however.
-
-
-
-    Notes
-    -----
-        Any masked values in x is propagated in y, and vice-versa.
-
+    Any masked values in x are propagated to y, and vice-versa.
     """
     order = int(deg) + 1
     x = asarray(x)
@@ -1145,7 +1033,7 @@
         x = x / scale
     # solve least squares equation for powers of x
     v = vander(x, order)
-    c, resids, rank, s = _lstsq(v, y.filled(0), rcond)
+    c, resids, rank, s = lstsq(v, y.filled(0), rcond)
     # warn on rank reduction, which indicates an ill conditioned matrix
     if rank != order and not full:
         warnings.warn("Polyfit may be poorly conditioned", np.RankWarning)
@@ -1159,5 +1047,6 @@
         return c, resids, rank, s, rcond
     else :
         return c
+polyfit.__doc__ = ma.doc_note(np.polyfit.__doc__, polyfit.__doc__)
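The masked polyfit now delegates the solve to numpy.linalg.lstsq; combined
with the masked vander above, any point whose x (or y) is masked contributes
only a zero row and is effectively ignored by the fit.  A hedged sketch
(exact floating-point output will differ slightly):

    >>> import numpy.ma as ma
    >>> x = ma.array([0., 1., 2., 3., 4.], mask=[0, 0, 0, 0, 1])
    >>> y = ma.array([1., 3., 5., 7., -999.])
    >>> ma.polyfit(x, y, 1)                           # doctest: +SKIP
    array([ 2.,  1.])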
 
 ################################################################################

Modified: branches/dynamic_cpu_configuration/numpy/ma/tests/test_core.py
===================================================================
--- branches/dynamic_cpu_configuration/numpy/ma/tests/test_core.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/ma/tests/test_core.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -693,7 +693,7 @@
     def test_minmax_funcs_with_output(self):
         "Tests the min/max functions with explicit outputs"
         mask = np.random.rand(12).round()
-        xm = array(np.random.uniform(0,10,12),mask=mask)
+        xm = array(np.random.uniform(0,10,12), mask=mask)
         xm.shape = (3,4)
         for funcname in ('min', 'max'):
             # Initialize
@@ -701,11 +701,16 @@
             mafunc = getattr(numpy.ma.core, funcname)
             # Use the np version
             nout = np.empty((4,), dtype=int)
-            result = npfunc(xm,axis=0,out=nout)
+            try:
+                result = npfunc(xm, axis=0, out=nout)
+            except MaskError:
+                pass
+            nout = np.empty((4,), dtype=float)
+            result = npfunc(xm, axis=0, out=nout)
             self.failUnless(result is nout)
             # Use the ma version
             nout.fill(-999)
-            result = mafunc(xm,axis=0,out=nout)
+            result = mafunc(xm, axis=0, out=nout)
             self.failUnless(result is nout)
 
 
@@ -820,6 +825,7 @@
             self.failUnless(result is output)
             self.failUnless(output[0] is masked)
 
+
 #------------------------------------------------------------------------------
 
 class TestMaskedArrayAttributes(TestCase):
@@ -1235,23 +1241,132 @@
 
     def test_inplace_division_misc(self):
         #
-        x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.])
-        y = np.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.])
-        m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
-        m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 ,0, 1]
+        x = [1., 1., 1.,-2., pi/2.,  4., 5., -10., 10., 1., 2., 3.]
+        y = [5., 0., 3., 2.,   -1., -4., 0., -10., 10., 1., 0., 3.]
+        m1 = [1,  0,  0,  0,     0,   0, 1,     0,   0,  0,  0, 0]
+        m2 = [0,  0,  1,  0,     0,   1, 1,     0,   0,  0 , 0, 1]
         xm = masked_array(x, mask=m1)
         ym = masked_array(y, mask=m2)
         #
         z = xm/ym
         assert_equal(z._mask, [1,1,1,0,0,1,1,0,0,0,1,1])
-        assert_equal(z._data, [0.2,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.])
+        assert_equal(z._data, [1.,1.,1.,-1.,-pi/2.,4.,5.,1.,1.,1.,2.,3.])
+        #assert_equal(z._data, [0.2,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.])
         #
         xm = xm.copy()
         xm /= ym
         assert_equal(xm._mask, [1,1,1,0,0,1,1,0,0,0,1,1])
-        assert_equal(xm._data, [1/5.,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.])
+        assert_equal(z._data, [1.,1.,1.,-1.,-pi/2.,4.,5.,1.,1.,1.,2.,3.])
+        #assert_equal(xm._data, [1/5.,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.])
 
 
+    def test_datafriendly_add(self):
+        "Test keeping data w/ (inplace) addition"
+        x = array([1, 2, 3], mask=[0, 0, 1])
+        # Test add w/ scalar
+        xx = x + 1
+        assert_equal(xx.data, [2, 3, 3])
+        assert_equal(xx.mask, [0, 0, 1])
+        # Test iadd w/ scalar
+        x += 1
+        assert_equal(x.data, [2, 3, 3])
+        assert_equal(x.mask, [0, 0, 1])
+        # Test add w/ array
+        x = array([1, 2, 3], mask=[0, 0, 1])
+        xx = x + array([1, 2, 3], mask=[1, 0, 0])
+        assert_equal(xx.data, [1, 4, 3])
+        assert_equal(xx.mask, [1, 0, 1])
+        # Test iadd w/ array
+        x = array([1, 2, 3], mask=[0, 0, 1])
+        x += array([1, 2, 3], mask=[1, 0, 0])
+        assert_equal(x.data, [1, 4, 3])
+        assert_equal(x.mask, [1, 0, 1])
+
+
+    def test_datafriendly_sub(self):
+        "Test keeping data w/ (inplace) subtraction"
+        # Test sub w/ scalar
+        x = array([1, 2, 3], mask=[0, 0, 1])
+        xx = x - 1
+        assert_equal(xx.data, [0, 1, 3])
+        assert_equal(xx.mask, [0, 0, 1])
+        # Test isub w/ scalar
+        x = array([1, 2, 3], mask=[0, 0, 1])
+        x -= 1
+        assert_equal(x.data, [0, 1, 3])
+        assert_equal(x.mask, [0, 0, 1])
+        # Test sub w/ array
+        x = array([1, 2, 3], mask=[0, 0, 1])
+        xx = x - array([1, 2, 3], mask=[1, 0, 0])
+        assert_equal(xx.data, [1, 0, 3])
+        assert_equal(xx.mask, [1, 0, 1])
+        # Test isub w/ array
+        x = array([1, 2, 3], mask=[0, 0, 1])
+        x -= array([1, 2, 3], mask=[1, 0, 0])
+        assert_equal(x.data, [1, 0, 3])
+        assert_equal(x.mask, [1, 0, 1])
+
+
+    def test_datafriendly_mul(self):
+        "Test keeping data w/ (inplace) multiplication"
+        # Test mul w/ scalar
+        x = array([1, 2, 3], mask=[0, 0, 1])
+        xx = x * 2
+        assert_equal(xx.data, [2, 4, 3])
+        assert_equal(xx.mask, [0, 0, 1])
+        # Test imul w/ scalar
+        x = array([1, 2, 3], mask=[0, 0, 1])
+        x *= 2
+        assert_equal(x.data, [2, 4, 3])
+        assert_equal(x.mask, [0, 0, 1])
+        # Test mul w/ array
+        x = array([1, 2, 3], mask=[0, 0, 1])
+        xx = x * array([10, 20, 30], mask=[1, 0, 0])
+        assert_equal(xx.data, [1, 40, 3])
+        assert_equal(xx.mask, [1, 0, 1])
+        # Test imul w/ array
+        x = array([1, 2, 3], mask=[0, 0, 1])
+        x *= array([10, 20, 30], mask=[1, 0, 0])
+        assert_equal(x.data, [1, 40, 3])
+        assert_equal(x.mask, [1, 0, 1])
+
+
+    def test_datafriendly_div(self):
+        "Test keeping data w/ (inplace) division"
+        # Test div on scalar
+        x = array([1, 2, 3], mask=[0, 0, 1])
+        xx = x / 2.
+        assert_equal(xx.data, [1/2., 2/2., 3])
+        assert_equal(xx.mask, [0, 0, 1])
+        # Test idiv on scalar
+        x = array([1., 2., 3.], mask=[0, 0, 1])
+        x /= 2.
+        assert_equal(x.data, [1/2., 2/2., 3])
+        assert_equal(x.mask, [0, 0, 1])
+        # Test div on array
+        x = array([1., 2., 3.], mask=[0, 0, 1])
+        xx = x / array([10., 20., 30.], mask=[1, 0, 0])
+        assert_equal(xx.data, [1., 2./20., 3.])
+        assert_equal(xx.mask, [1, 0, 1])
+        # Test idiv on array
+        x = array([1., 2., 3.], mask=[0, 0, 1])
+        x /= array([10., 20., 30.], mask=[1, 0, 0])
+        assert_equal(x.data, [1., 2/20., 3.])
+        assert_equal(x.mask, [1, 0, 1])
+
+
+    def test_datafriendly_pow(self):
+        "Test keeping data w/ (inplace) power"
+        # Test pow on scalar
+        x = array([1., 2., 3.], mask=[0, 0, 1])
+        xx = x ** 2.5
+        assert_equal(xx.data, [1., 2.**2.5, 3.])
+        assert_equal(xx.mask, [0, 0, 1])
+        # Test ipow on scalar
+        x **= 2.5
+        assert_equal(x.data, [1., 2.**2.5, 3])
+        assert_equal(x.mask, [0, 0, 1])
+
 #------------------------------------------------------------------------------
 
 class TestMaskedArrayMethods(TestCase):
@@ -1952,6 +2067,22 @@
             _ = method(out=nout, ddof=1)
             self.failUnless(np.isnan(nout))
 
+
+    def test_diag(self):
+        "Test diag"
+        x = arange(9).reshape((3,3))
+        x[1,1] = masked
+        out = np.diag(x)
+        assert_equal(out, [0, 4, 8])
+        out = diag(x)
+        assert_equal(out, [0, 4, 8])
+        assert_equal(out.mask, [0, 1, 0])
+        out = diag(out)
+        control = array([[0, 0, 0], [0, 4, 0], [0, 0, 8]], 
+                        mask = [[0, 0, 0], [0, 1, 0], [0, 0, 0]])
+        assert_equal(out, control)
+
+
 #------------------------------------------------------------------------------
 
 class TestMaskedArrayMathMethodsComplex(TestCase):
@@ -2132,8 +2263,8 @@
 
     def test_power(self):
         x = -1.1
-        assert_almost_equal(power(x,2.), 1.21)
-        self.failUnless(power(x,masked) is masked)
+        assert_almost_equal(power(x, 2.), 1.21)
+        self.failUnless(power(x, masked) is masked)
         x = array([-1.1,-1.1,1.1,1.1,0.])
         b = array([0.5,2.,0.5,2.,-1.], mask=[0,0,0,0,1])
         y = power(x,b)
@@ -2312,17 +2443,31 @@
 
     def test_make_mask_descr(self):
         "Test make_mask_descr"
+        # Flexible
         ntype = [('a',np.float), ('b',np.float)]
         test = make_mask_descr(ntype)
         assert_equal(test, [('a',np.bool),('b',np.bool)])
-        #
+        # Standard w/ shape
         ntype = (np.float, 2)
         test = make_mask_descr(ntype)
         assert_equal(test, (np.bool,2))
-        #
+        # Standard
         ntype = np.float
         test = make_mask_descr(ntype)
         assert_equal(test, np.dtype(np.bool))
+        # Nested
+        ntype = [('a', np.float), ('b', [('ba', np.float), ('bb', np.float)])]
+        test = make_mask_descr(ntype)
+        control = np.dtype([('a', 'b1'), ('b', [('ba', 'b1'), ('bb', 'b1')])])
+        assert_equal(test, control)
+        # Named+ shape
+        ntype = [('a', (np.float, 2))]
+        test = make_mask_descr(ntype)
+        assert_equal(test, np.dtype([('a', (np.bool, 2))]))
+        # 2 names
+        ntype = [(('A', 'a'), float)]
+        test = make_mask_descr(ntype)
+        assert_equal(test, np.dtype([(('A', 'a'), bool)]))
 
 
     def test_make_mask(self):
@@ -2388,6 +2533,24 @@
             pass
 
 
+    def test_flatten_mask(self):
+        "Tests flatten mask"
+        # Standard dtype
+        mask = np.array([0, 0, 1], dtype=np.bool)
+        assert_equal(flatten_mask(mask), mask)
+        # Flexible dtype
+        mask = np.array([(0, 0), (0, 1)], dtype=[('a', bool), ('b', bool)])
+        test = flatten_mask(mask)
+        control = np.array([0, 0, 0, 1], dtype=bool)
+        assert_equal(test, control)
+        
+        mdtype = [('a', bool), ('b', [('ba', bool), ('bb', bool)])]
+        data = [(0, (0, 0)), (0, (0, 1))]
+        mask = np.array(data, dtype=mdtype)
+        test = flatten_mask(mask)
+        control = np.array([ 0, 0, 0, 0, 0, 1], dtype=bool)
+        assert_equal(test, control)
+
 #------------------------------------------------------------------------------
 
 class TestMaskedFields(TestCase):

Modified: branches/dynamic_cpu_configuration/numpy/ma/tests/test_extras.py
===================================================================
--- branches/dynamic_cpu_configuration/numpy/ma/tests/test_extras.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/ma/tests/test_extras.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -17,6 +17,62 @@
 from numpy.ma.core import *
 from numpy.ma.extras import *
 
+
+class TestGeneric(TestCase):
+    #
+    def test_masked_all(self):
+        "Tests masked_all"
+        # Standard dtype 
+        test = masked_all((2,), dtype=float)
+        control = array([1, 1], mask=[1, 1], dtype=float)
+        assert_equal(test, control)
+        # Flexible dtype
+        dt = np.dtype({'names': ['a', 'b'], 'formats': ['f', 'f']})
+        test = masked_all((2,), dtype=dt)
+        control = array([(0, 0), (0, 0)], mask=[(1, 1), (1, 1)], dtype=dt)
+        assert_equal(test, control)
+        test = masked_all((2,2), dtype=dt)
+        control = array([[(0, 0), (0, 0)], [(0, 0), (0, 0)]],
+                        mask=[[(1, 1), (1, 1)], [(1, 1), (1, 1)]],
+                        dtype=dt)
+        assert_equal(test, control)
+        # Nested dtype
+        dt = np.dtype([('a','f'), ('b', [('ba', 'f'), ('bb', 'f')])])
+        test = masked_all((2,), dtype=dt)
+        control = array([(1, (1, 1)), (1, (1, 1))],
+                         mask=[(1, (1, 1)), (1, (1, 1))], dtype=dt)
+        assert_equal(test, control)
+        test = masked_all((2,), dtype=dt)
+        control = array([(1, (1, 1)), (1, (1, 1))],
+                         mask=[(1, (1, 1)), (1, (1, 1))], dtype=dt)
+        assert_equal(test, control)
+        test = masked_all((1,1), dtype=dt)
+        control = array([[(1, (1, 1))]], mask=[[(1, (1, 1))]], dtype=dt)
+        assert_equal(test, control)
+
+
+    def test_masked_all_like(self):
+        "Tests masked_all"
+        # Standard dtype 
+        base = array([1, 2], dtype=float)
+        test = masked_all_like(base)
+        control = array([1, 1], mask=[1, 1], dtype=float)
+        assert_equal(test, control)
+        # Flexible dtype
+        dt = np.dtype({'names': ['a', 'b'], 'formats': ['f', 'f']})
+        base = array([(0, 0), (0, 0)], mask=[(1, 1), (1, 1)], dtype=dt)
+        test = masked_all_like(base)
+        control = array([(10, 10), (10, 10)], mask=[(1, 1), (1, 1)], dtype=dt)
+        assert_equal(test, control)
+        # Nested dtype
+        dt = np.dtype([('a','f'), ('b', [('ba', 'f'), ('bb', 'f')])])
+        control = array([(1, (1, 1)), (1, (1, 1))],
+                        mask=[(1, (1, 1)), (1, (1, 1))], dtype=dt)
+        test = masked_all_like(control)
+        assert_equal(test, control)
+        #
+
+
 class TestAverage(TestCase):
     "Several tests of average. Why so many ? Good point..."
     def test_testAverage1(self):

Modified: branches/dynamic_cpu_configuration/numpy/ma/testutils.py
===================================================================
--- branches/dynamic_cpu_configuration/numpy/ma/testutils.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/numpy/ma/testutils.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -167,22 +167,24 @@
     """Asserts that a comparison relation between two masked arrays is satisfied
     elementwise."""
     # Fill the data first
-    xf = filled(x)
-    yf = filled(y)
+#    xf = filled(x)
+#    yf = filled(y)
     # Allocate a common mask and refill
     m = mask_or(getmask(x), getmask(y))
-    x = masked_array(xf, copy=False, mask=m)
-    y = masked_array(yf, copy=False, mask=m)
+    x = masked_array(x, copy=False, mask=m, subok=False)
+    y = masked_array(y, copy=False, mask=m, subok=False)
     if ((x is masked) and not (y is masked)) or \
         ((y is masked) and not (x is masked)):
         msg = build_err_msg([x, y], err_msg=err_msg, verbose=verbose,
                             header=header, names=('x', 'y'))
         raise ValueError(msg)
     # OK, now run the basic tests on filled versions
+    comparison = getattr(np, comparison.__name__, lambda x,y: True)
     return utils.assert_array_compare(comparison,
-                                x.filled(fill_value), y.filled(fill_value),
-                                err_msg=err_msg,
-                                verbose=verbose, header=header)
+                                      x.filled(fill_value),
+                                      y.filled(fill_value),
+                                      err_msg=err_msg,
+                                      verbose=verbose, header=header)
 
 
 def assert_array_equal(x, y, err_msg='', verbose=True):

Modified: branches/dynamic_cpu_configuration/setup.py
===================================================================
--- branches/dynamic_cpu_configuration/setup.py	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/setup.py	2008-12-22 13:23:03 UTC (rev 6188)
@@ -44,6 +44,14 @@
 # a lot more robust than what was previously being used.
 __builtin__.__NUMPY_SETUP__ = True
 
+def setup_doc_files(configuration):
+    # Add doc sources
+    configuration.add_data_dir("doc/release")
+    configuration.add_data_dir("doc/source")
+    configuration.add_data_dir("doc/sphinxext")
+    configuration.add_data_files(("doc/Makefile"), ("doc/postprocess.py"))
+
+
 def configuration(parent_package='',top_path=None):
     from numpy.distutils.misc_util import Configuration
 
@@ -61,6 +69,8 @@
 
     config.get_version('numpy/version.py') # sets config.version
 
+    setup_doc_files(config)
+
     return config
 
 def setup_package():

Modified: branches/dynamic_cpu_configuration/tools/win32build/nsis_scripts/numpy-superinstaller.nsi.in
===================================================================
--- branches/dynamic_cpu_configuration/tools/win32build/nsis_scripts/numpy-superinstaller.nsi.in	2008-12-22 10:05:00 UTC (rev 6187)
+++ branches/dynamic_cpu_configuration/tools/win32build/nsis_scripts/numpy-superinstaller.nsi.in	2008-12-22 13:23:03 UTC (rev 6188)
@@ -6,6 +6,11 @@
 ;SetCompress off ; Useful to disable compression under development
 SetCompressor /Solid LZMA ; Useful to disable compression under development
 
+; Include FileFunc for command line parsing options
+!include "FileFunc.nsh"
+!insertmacro GetParameters
+!insertmacro GetOptions
+
 ;--------------------------------
 ;General
 
@@ -46,7 +51,35 @@
 Var HasSSE2
 Var HasSSE3
 Var CPUSSE
+Var option_arch
 
+Function .onInit
+        ; Get parameters
+        var /GLOBAL cmdLineParams
+        Push $R0
+
+        ${GetParameters} $cmdLineParams
+
+        ; XXX: How do we get console output for the help text? A GUI message
+        ; box seems useless for command-line help...
+        ; ; /? param (help)
+        ; ClearErrors
+        ; ${GetOptions} $cmdLineParams '/?' $R0
+        ; IfErrors +3 0
+        ; MessageBox MB_OK "list all command line options here!"
+        ; Abort
+
+        Pop $R0
+
+        ; Initialise options
+        StrCpy $option_arch 'native'
+
+        ; Parse Parameters
+        Push $R0
+        Call parseParameters
+        Pop $R0
+FunctionEnd
+
 Section "Core" SecCore
 
         ;SectionIn RO
@@ -89,6 +122,28 @@
 
         ClearErrors
 
+        ${Switch} $option_arch
+                ${Case} "native"
+                DetailPrint '"native install (arch value: $option_arch)"'
+                ${Break}
+                ${Case} "nosse"
+                DetailPrint '"nosse install (arch value: $option_arch)"'
+                StrCpy $CPUSSE "0"
+                ${Break}
+                ${Case} "sse2"
+                DetailPrint '"sse2 install (arch value: $option_arch)"'
+                StrCpy $CPUSSE "2"
+                ${Break}
+                ${Case} "sse3"
+                DetailPrint '"sse3 install (arch value: $option_arch)"'
+                StrCpy $CPUSSE "3"
+                ${Break}
+                ${Default}
+                MessageBox MB_OK "option /arch $option_arch not understood: only native, nosse, sse2 and sse3 are valid."
+                Abort
+                ${Break}
+        ${EndSwitch}
+
         ; Install files conditionaly on detected cpu
         ${Switch} $CPUSSE
                 ${Case} "3"
@@ -119,3 +174,10 @@
         done:
 
 SectionEnd
+
+Function parseParameters
+    ; /arch option
+    ${GetOptions} $cmdLineParams '/arch' $R0
+    IfErrors +2 0
+    StrCpy $option_arch $R0
+FunctionEnd
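For reference, the new /arch switch lets the superpack installer be forced
to a particular build instead of relying on the CPU detection above; valid
values are native (the default), nosse, sse2 and sse3.  A hypothetical
invocation (the installer filename, and the exact option syntax accepted by
GetOptions, are assumptions):

    numpy-1.3.0-superpack-win32.exe /arch nosse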



